From YouTube: Kubernetes WG IoT Edge 20220810
Description
August 10, 2022 meeting of the CNCF IoT Edge Working Group. Recap of interesting edge-related talks from the SCaLE 19x conference.
A: Hi, welcome to the August 10th meeting of the CNCF IoT Edge Working Group. On today's agenda, I'm going to give a recap of a recent open source conference that had three good sessions addressing running things at the edge. I believe the agenda also includes a discussion of the white paper this group is working on, and we may have time for user-nominated additional items, or just free-form birds-of-a-feather discussion. I think all of you were here when I posted the link in the chat to the agenda notes document. I'm away from home, so I'm a little bit limited on presentation; maybe I'll just open the notes and read from them, since I don't really have a slide deck or anything to go on.

A: If you look at the agenda notes doc, there are links to get at a lot of these, including the video recordings of the sessions I'm talking about. This conference is called the SCaLE conference, and it's the largest independent open source conference in North America. "Independent" means it's just kind of ad hoc, with local organizers, and it grew over time; this recent edition is the 19th. It literally started in somebody's dorm room on the campus of USC 19 years ago, with under 20 people, but it has gained quite a bit of momentum. Before COVID hit, it was running 4,500 attendees. For this last one they didn't know what to expect coming out of COVID, so they moved it from a convention center to a hotel, and registrations hit 2,500, which was the cap for how much that venue could hold. So it was still a pretty good attendance.

A: "Independent" also means it's not sponsored by a company or an open source foundation. One aspect of that is that it's a little informal, with volunteers running even the AV and the Wi-Fi, but it's also inexpensive, which brings in a really diverse audience: you can manage to attend with a discount for maybe 45 or 50 dollars pretty easily. It's really pretty much my favorite open source conference. So, getting to what's of interest to this group, there were three talks.

A: The first one was on the Podman container engine. This talk was by Dan Walsh of Red Hat, who I think it's fair to say is the architect of Podman, or at least I'd consider him that; he's been talking about it for as long as I've ever heard of it. His talk was on recently added features, plus things that are on the roadmap for the future, and there were some really interesting ones.

A: What this means is that at an edge leaf node, you don't have to install a full Kubernetes, or even turn the device into a Kubernetes worker cluster node, like some of the open source solutions do. Instead it's sufficient to just install the Podman engine, and in many respects you can manage it in a way similar to Kubernetes, as if you had Kubernetes: it's designed to take a Kubernetes pod spec in YAML and run it, and the way this is done does support things like volumes.
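The capability being described corresponds to Podman's ability to run Kubernetes YAML directly, the `podman kube play` subcommand (`podman play kube` in older releases). A minimal sketch, where the pod name, image, and ports are illustrative, not from the talk:

```shell
# pod.yaml: a standard Kubernetes pod spec (names here are illustrative).
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: edge-web
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:alpine
    ports:
    - containerPort: 80
      hostPort: 8080
EOF

# Run the Kubernetes pod spec directly on the Podman engine,
# with no Kubernetes control plane or kubelet on the node.
podman kube play pod.yaml

# Tear the pod down again from the same YAML.
podman kube down pod.yaml
```

The same YAML can later be applied unchanged to a real cluster, which is the point of the feature for edge nodes.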
A: So that's the first edge-related feature. The second is support for Wasm-based containers using an OCI-packaged runtime. I know in this group we've talked before about WebAssembly, and it looks like that's an upcoming Podman feature: you package your WebAssembly in an OCI container and have the Podman engine run it for you at an edge location.
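As a sketch of how that packaging can look: newer Podman releases can build and run Wasm workloads when a Wasm-capable OCI runtime (such as a WasmEdge-enabled crun) is installed. The `wasi/wasm` platform flag and every name below follow the pattern documented for Docker/Podman Wasm support, but details vary by version, so treat this as illustrative rather than the exact workflow from the talk:

```shell
# Package a precompiled WebAssembly module into an OCI image.
# (Containerfile contents and module name are illustrative.)
cat > Containerfile <<'EOF'
FROM scratch
COPY hello.wasm /hello.wasm
ENTRYPOINT ["/hello.wasm"]
EOF
podman build --platform wasi/wasm -t localhost/hello-wasm .

# Run it through a Wasm-capable OCI runtime, e.g. crun built with WasmEdge.
podman run --rm --platform wasi/wasm localhost/hello-wasm
```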
A: There are also features for auto-update of Podman workloads when they're installed at the edge. So if there are CVEs exposed, in theory the thing is capable of keeping itself patched.
the
final
thing
I
encourage
you
to
watch
the
full
presentation
by
Dan,
because
it
was
very
good.
But
the
final
thing
I'm
going
to
recap
here,
is
that
the
health
metrics
of
podman
seemed
very
good
Dan
had
a
slide
of
what
I'm
calling
hard
metrics.
You
know.
A
Soft
metrics
to
me
are
things
like
GitHub
stars
that
I
don't
know
I.
Whenever
I
see
a
project
touting
those,
since
you
can
award
those
to
yourself,
I'm
extremely
skeptical,
but
the
number
of
committers
the
response
time
to
PR
is
being
submitted
and
PR
is
being
merged.
Those
kinds
of
things
are
kind
of
tough
to
game
and
the
podman
specs
that
he
showed
looked
really
good.
A: He mentioned that when Docker changed the licensing for Docker Desktop, that had a big impact on Podman adoption; the speculation is that that might have been a big influence on why the Podman metrics went turbocharged. Moving on, the second talk was on something I hadn't heard of until I saw this talk, called Wyrcan. Wyrcan was spun off, as I understand it, from the Enarx project, a way of running trusted executables.
A: What they call trusted executables run on an appropriate CPU. Recent versions of AMD and Intel CPU chips have a mechanism for running VMs in an encrypted form in memory, so that in theory it's something like a TPM chip, but the entire VM is kept secure, and someone really can't hack in and see what's going on in there. That idea could be potentially very important for a lot of edge applications, which are unattended and lack physical security; a TPM module can store secrets and things securely to some degree.
A: But this Enarx project is a way to run whole VMs in this mode, and Wyrcan spun off, if I understood the talk correctly, as a mechanism to make these secure VMs bootable. Wyrcan was designed to be a packaged unit where you can run a containerized app with the kernel to go with it, and have bare metal at the edge boot off of an OCI container that contains both the kernel and the traditional app. And it was interesting; I didn't realize this was possible.
A: The container gets larger, but it is possible to build yourself an OCI image that has both the kernel plus an application that runs on top of that kernel, and Wyrcan does that. They've got this technology working; it's published with an Apache license on GitLab (I'll get to why it's on GitLab, instead of the perhaps more popular GitHub, in a moment), written in Rust, and you can boot up a bare metal device via PXE and a number of other forms.
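For illustration, the general shape of an OCI image that carries its own kernel plus an app can look like the following. The base image, file names, and registry are assumptions made for this sketch; Wyrcan's GitLab repository documents its actual image conventions:

```shell
# Containerfile for an OCI image that carries its own kernel
# (illustrative; not Wyrcan's exact layout).
cat > Containerfile <<'EOF'
FROM docker.io/library/fedora:36
# Install a kernel inside the image so a container bootloader can
# locate the kernel and its matching modules under /lib/modules.
RUN dnf install -y kernel && dnf clean all
# The traditional application that runs on top of that kernel.
COPY myapp /usr/local/bin/myapp
EOF

podman build -t registry.example.com/edge/kernel-app .
podman push registry.example.com/edge/kernel-app
```

The bare-metal device then PXE-boots the bootloader, which pulls this image and boots the kernel it finds inside.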
A: It runs entirely in memory, not on disk. That's interesting, and it might have some implications, but because of this it's really resistant to things like bricking. If you released a new kernel and you had a watchdog, and the new one didn't come up, you could reconfigure to boot off the older version; you'd essentially have a form of atomic update of the OS packaged with your application.
A: So I found that kind of interesting. Given that it runs entirely in memory, it might conceivably take more memory, but the speaker said it is possible to use a disk that's there as swap, so you maybe don't have to worry about that memory limitation. If you do need to persist state, you can mount volumes, and state or configuration of the network and other things is persisted in EFI variables.
A: Looking at the notes I dropped in here, I think I've covered most of this. Oh: it runs in this encrypted mode given the appropriate hardware, and apparently the appropriate hardware requires a particular instruction set enhancement. There are different acronyms for AMD versus Intel, and I can't recall what they were, but they're in that recorded presentation if you want to look them up. It's only found on very recent CPUs, but the speculation is that most of these instruction enhancements, if they follow history, will trickle down to the lesser CPUs.
A: Finally, saving the best for last, the presentation I recommend looking at is by Frédéric Desbiens, who's on the call here. He gave a talk entitled "Should You Bring Kubernetes on Your Edge Road Trip?" It was a very broad coverage of various open source platforms available for edge, both thick and thin, some optimized for different use cases compared to others. There was a slide with a very good table that I cut and pasted into the notes.
A: I'll just read a few out of the list: AWS Outposts, Eclipse ioFog and fog05, k3s, KubeEdge, and more. The table lists whether they're edge-only or cloud-managed, what their focus is, etc. I do have a link to the recording on YouTube, and I'll caution you: on the recording you have to jump to a particular place, because SCaLE, being kind of community-managed, just leaves the camera running more or less unattended all day per conference room.
A: So when you find the recording on YouTube, every talk in that same room is on the same very long recording, and you have to jump to the part you're interested in. But Frederick, I'll turn it over to you for a few remarks; go ahead and break in if you like, and maybe you can say a few more words about your experience with the conference. Don't be afraid to offend me.
C: The location sucked, so to speak, but that's not the usual one. I know that they had to move it, and certainly I'm looking forward to getting back next year, because in my opinion it's been a great event, well run, with lots of the kind of audience I need to reach in my particular position at the foundation. So only good things to say about it.
C: Apart from the fact that we were at a hotel literally beside the airport, not the best physical location, for the rest, as an event, it's been great. And I love the community-oriented concept: game night, the fact that people normally come there with kids, and that normally there would have been an under-18 speaking track as well.
C: So all of those things are certainly good. And I must say I attended the Podman talk, Steve, that you talked about, and it's certainly a great one; it certainly piqued my interest in trying it out, given the limitations now around the Docker Desktop technology. So it's great to have a full open source alternative.
C: In the case of the other talk, I was wondering about Wyrcan: how would you contrast the approach, or the kind of target market? I know this would have been a question for the speaker, but since you attended, and you know the market broadly, how would you contrast their approach compared, let's say, to EVE-OS at LF Edge?
A: You know, I think they're potentially very similar, and I'm not sure. With Wyrcan, literally, going to this conference was the first I'd heard of it, and I believe it's extremely new. I think that for now, anyway, EVE-OS does not use that secure compute platform, at least in what I've seen, though I don't know of any impediment that would stop them from using it.
A: But I think that Wyrcan was a spin-off of Enarx, which was a project specifically aligned with the secure compute, and what they wanted was a way, once they had this secure compute, in other words an ability to run VMs.
A: Encrypted VMs. I am new to this, but what "encrypted VMs" means, I believe, is that even if you were a hardware hacker, and you could somehow tamper with the hardware and get into the live memory image, what you'd find there isn't anything readable.
A: You could theoretically run one of these at some completely unattended location where somebody goes in there, cuts through the lid with a saw, and starts putting logic probes in to try to intercept the CPU-to-memory path, and maybe they still can't get anything valuable. So I believe that Wyrcan started with the orientation of them just needing a way to boot into these encrypted VMs on bare metal, and it was made to serve that purpose.
A: It's a little different from EVE-OS in that it runs, I think as an architectural limitation, entirely in memory, and I don't think EVE-OS has that limitation at all. And there are ways that people contend you can achieve your security with the TPM module in conjunction with the UEFI BIOS and the right OS. So I'm the wrong person to give it a grade on whether this...
C: It was more of a feature comparison, I would say, not a quality comparison, because you would have to implement the thing in the real world. But thank you for those insights. I think one of the potential differentiators is that since EVE-OS is relying on a hypervisor, a level-one one, it can run VMs and containers. So that's probably one possible differentiation. But anyway, it's got me thinking, and that's the whole point of a presentation.
A: I almost think that maybe the two might, as often happens in open source, still adopt useful features from each other. This really is the first I've ever heard of the ability to package a kernel in an OCI container, but now that I know it's there, I don't see why this couldn't be done over an EVE-OS. And it's a cool idea, because as use of these registries becomes ubiquitous, there are a lot of people extending these OCI containers, to put WebAssemblies in there, for example.
A
For
example,
or
there
are
a
number
of
container
registry
open
source
projects
that
got
enhanced
to
have
them
also
host
Helm
charts,
for
example,
yeah.
A
So
it's
almost
becoming
the
Jack
of
all
trades
if
you
will
for
Edge,
particularly
if
you
want
to
run
air
gapped,
but
actually,
even
if
you
don't
require
air
gap,
but
you
don't
have
very
good
Upstream
internet
connectivity
there
or
if
you
wanted
to
impose
a
governance
layer
having
this
host
of
generic
binaries
for
various
purposes,
is
something
that
I
think
people
finally
want
to
do
when
they
go
at
Edge
at
scale.
Yeah
I.
A: Yeah. Just packaging OSes in OCI, I think, is a concept; this is the first I've ever heard of it, even with legacy boot technologies like PXE. Once I heard of that idea, I went, "geez, couldn't somebody do that?" And they probably can. And by the way, when I was talking about Wyrcan I said I'd mention why it's on GitLab: it turned out that GitLab has a facility to host things like this that it's willing to provide. Pretty cool.
C: In any case, I had a great time, and it was nice to see you in person; that was the first time we were actually seeing each other. As I said, I'm looking forward to submitting a paper for next year. I discussed this with my colleague Augustin at the Oniro working group, and we are certainly looking at having a booth and making a bigger splash for next year. So thank you for introducing me to the event.
A: Okay, so the next item on the agenda that I put there: I was aspiring to do an actual demo of Argo Tunnels at the edge, but I'm traveling and I found that my VPN connection into my house is down, so I can't do a live demo; I'm just going to talk about it. These Argo Tunnels are an interesting concept, a way to publish something. I started doing this just for kind of home-lab things, but it makes a lot of sense at an edge as well.
A: The typical thing you'd do if you're running at home and wanted to host a web page or a dashboard or something like that would be to use NAT: open an exposed port on the public internet that your firewall forwards through to whatever app you've configured.
A: In my case it's an app that is monitoring air traffic in Los Angeles, running on a Pi 3 that only has one gig of RAM, and the tunnel worked fine for that. It runs a little agent app that opens up a network forwarding to a localhost port on the destination you're trying to expose, and routes it to Cloudflare's CDN locations. I don't believe you get to choose; I think they just pick one for you, but Cloudflare has these points of presence all over, and it exposes it on Cloudflare. You can then create a DNS entry that routes to the Cloudflare IP; people connect to that IP and port, and Cloudflare forwards it down to your edge device. It means you have successfully exposed some app or service on the public internet, and you haven't even divulged your IP.
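The workflow being described maps onto Cloudflare's `cloudflared` agent (Argo Tunnel has since been rebranded Cloudflare Tunnel). A sketch of the usual CLI steps, assuming an existing Cloudflare account and DNS zone; the tunnel name, hostname, and local port are illustrative:

```shell
# Authenticate the agent against your Cloudflare account (opens a browser).
cloudflared tunnel login

# Create a named tunnel; this writes a credentials file locally.
cloudflared tunnel create edge-demo

# Point a public hostname in your zone at the tunnel.
cloudflared tunnel route dns edge-demo dashboard.example.com

# Run the agent on the edge device, forwarding the public hostname to a
# local service. The connection is outbound only: no inbound firewall
# port, and your home/edge IP is never published.
cloudflared tunnel run --url http://localhost:8080 edge-demo
```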
A: If somebody looks up this DNS name, it's terminated at Cloudflare, and if you want, they will actually provide an SSL/TLS service for it, so you don't necessarily have to go with something like Let's Encrypt to put a certificate on it. In fact, if you're on a low-resource device, it can run relatively safely just using HTTP, and if you're really challenged CPU-wise, that might be a benefit, because the device could have trouble keeping up with the additional workload of a TLS connection. I was able to get it to expose out of a bare-metal Raspberry Pi, I've gotten it to expose out of VMs, and I believe you can even use it to front a Kubernetes LoadBalancer service. I did not get so far as to front a Kubernetes Ingress, but I believe it should work.
A: I just didn't get to the point of successfully trying that yet, but I did do Kubernetes LoadBalancer services and got it to expose those. At a certain point, Cloudflare isn't doing this as a charity; they charge for it. However, you can get a free account, and I believe the free tier lets you do about five of these tunnels or something. You'd better look it up, as I'm not sure of the exact count, but on the free tier you can get several.
A: Let's put it that way. And I think the paid tier adds some features as well, where you can have Cloudflare hosting user authentication: instead of it being just wide open to everybody who happens to have the URL, Cloudflare would run an authentication process with a login for you. You'd have to pay to get up to that level, but I'd encourage you to look into it.
A: Now, there is something similar. If people are familiar with the open source community, there's a guy named Alex Ellis, who's very big in so-called event-driven and serverless, who came up with a project called inlets.
A: inlets, I think, is a similar concept, where you run a lightweight agent on your edge device and pop it out somewhere on the public internet that has a little bit more horsepower, if you will, in terms of resources. The comparison there is that Alex isn't doing it as a charity either, but it's very inexpensive, and he doesn't have it tied in with a particular service provider: the other end, the cloud-hosted edge of your tunnel, can run on any of the public clouds you choose to stand up the publicly exposed agent on.
A: So it opens a tunnel from your edge device up to the agent running in a public cloud, and the public cloud exposes a public IP that people utilize to get to the edge-hosted service. When I put the things in the notes, I didn't have time before the meeting to add a note about this, but if you Google for "inlets" and "tunnel," I think you'll find it. I think that one doesn't have a free tier, but it's like five dollars a month.
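For comparison, an inlets setup looks roughly like the following. This is a sketch based on the inlets-pro documentation; the exact flags, ports, and licensing steps vary by version, so treat every value here (addresses, token, ports) as an assumption:

```shell
# On a small cloud VM, the public side of the tunnel: accept client
# connections and serve the forwarded traffic on a public port.
inlets-pro http server \
  --auto-tls --auto-tls-san 203.0.113.10 \
  --token my-secret-token \
  --port 8080

# On the edge device: dial out to the server and expose a local service
# through it. As with Cloudflare Tunnel, only outbound connectivity is
# needed from the edge.
inlets-pro http client \
  --url wss://203.0.113.10:8123 \
  --token my-secret-token \
  --upstream http://127.0.0.1:8080
```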
D: Sure, I can talk a little bit about the white paper. At our last meeting we discussed next steps on getting some writers involved. I posted on the Slack a couple of days ago to see if folks are interested in participating as writers, and I haven't heard a ton of responses there.
D: But we are still having some conversation threads on the white paper itself. I think a big step that we need to solidify is coming up with those principles and defining what the edge-native principles are. And then there have been some discussions around what areas of the edge we're going to focus on: the near edge, the far edge, the tiny edge. Right now it seems to be kind of a network focus, based on the expertise behind the draft, but the far edge is what we do still want to aim for.
D: That's kind of the discussion going on now. So we really just need to transition into defining those principles. Maybe we need a separate meeting to chat about what those principles are, or keep that in the document, but anyone's free to add onto it. Brandon, I don't know if you have any updates or thoughts on it as well.
E: I would tend to agree with that, but I would want to get input from other contributors, reviewers, and authors on whether that's the logical division we should be making. As to your other point around principles: I tried to get started with some of those, listed below in the outline, but I think that's an important piece we need to confirm, and also add in anything that's missing.
E: Maybe an open brainstorm with the group, or a separate call, would be good for that. This is an important milestone for us around this topic, so I just wouldn't want to miss any principles that we think could and should be covered.
D: Yeah, I think that's a good point. Also, one thing we kind of skipped over that's an important announcement: as part of this white paper, we submitted a talk to KubeCon North America to present our progress on the white paper. Whether or not it's completed at that point, at the least we should have those principles really nailed down. And that talk was accepted, which is very exciting.
D: That also kind of gives us the energy to nail down those principles. And if someone hasn't, I can go ahead and link the document here for folks.
E: One thing I'd maybe suggest, Kate, is that as we identify the principles, we try to come up with a somewhat concise name for each one, because right now they're listed as sentences of varying lengths. Maybe we adopt a standard of a principle name that is a bit shorter, encompassing what the principle entails, with a follow-up sentence or two behind it to detail that principle.
E: Just thinking in terms of consumability from a web page or a GitHub page or a PDF, I think we want those principles to stand out a little bit more and be the starter of recognition that is then followed up with more information.
D: Yeah, I agree with that. So on that note, we can work our way down them. For folks who have this open, the first principle is "edge-native applications span from the edge to the cloud." Brandon, I know you sketched these out, so it might be best if you follow up, but I can give my interpretations of these, which might be helpful for nailing them down.

D: To me this sounds like creating a continuum of the edge, so that your same application can be deployed in all these different zones, whether that's the near edge, far edge, or the cloud. Is that what you would say this first bullet point is?
B: But it's maybe not just portable; maybe it's that parts of the application need to live in different tiers, right? So some of it is on the edge, some of it is in the cloud.
D: Okay, so in this we're talking about the application as it can span multiple different tiers. I'm curious, then: should "portable" be a separate principle? Should you be able to interchange what part of the spectrum of computing each part of your application runs on, or is that not what we consider an important principle?
C: It's tricky there, because I can see a theoretical interest in it, but the problem with that is that, of course, the farther away you get from the cloud into the wild, the lesser your elasticity. In our own messaging, we say that the edges are pretty heterogeneous in this environment, so if you want things to be really, really portable, you get a level of complexity there because of that, right?
C: If you target, let's say, small edge nodes with AI accelerators, very far from the data center, then of course whatever container or microservice you deploy will take advantage of that, and if you then suddenly run in a completely different environment, on a different processor architecture, you won't be as optimized. So to a point, yes, you want to shuffle things around and have things be fairly portable, but to what degree? There's a very big price to pay.
D: So I'm writing these in the bullets, but I think we'll move them elsewhere later to not clutter the outline. The way I defined what you just summarized was: we want it to be portable, with limits. You want to be able to lift and shift things, but applications are still relatively tied to architectures.
D: Because, and this is coming from a tiny-edge perspective, which we might exclude, when you're thinking about interacting with the tiny edge, your far-edge applications are sometimes capability-focused, and the word "capability" extends to those tiny IoT devices around those smaller-size servers. So I guess I'll add that, but we can maybe toggle it based on whether that's in scope.
B: Yeah, not multi-cloud, but multi-edge, right?
C: And then, I didn't have a full look at the outline (I think I missed the previous meeting or something), but certainly you have to make the other assumptions: you're fully distributed, which means you fully expect the network to fail or degrade, and all of that. That's something that will ultimately have an impact even on your code, depending on the class of application.
C: If you're expecting to be able, let's say you're doing AI and you have a cloud-based error function for outliers and things like that, then you have to bake in the assumption that maybe that outside help won't be available all the time, or could fail to perform to expected standards all the time.
D: I think maybe a nicer way to say that is "aware of intermittent connectivity," or...

D: Maybe more "lower availability," because we think of high availability with the cloud, and maybe it's lower availability.
C: "Variable availability" could work, because that's the thing: you never know what could happen. It could be a complete outage; it could be degradation in the environment. For example, I live in a place where we've got good cell connectivity most of the time, but when there's a power outage, everyone suddenly falls onto the cellular network, and then suddenly latency goes through the roof. So that's one typical example.
D: This is interesting. I think for all of this we're defining "application" as the larger application, or the larger solution you're creating, which I think is proper, because it's not really helpful to talk about one part of the edge in isolation. But do people have thoughts on this one, or maybe cases like this?
D: In this paragraph we probably don't want to call out a certain project, since we're trying to be independent here, but maybe, since this is coming out of the CNCF, we could mention several CNCF projects that provide this kind of solution.
C: I think it's fair game to at least provide examples, and people can make up their minds about the specific projects. I can certainly provide the Eclipse perspective on that one as well. The one point I would reinforce, and that was one of the tenets in my talk that Steve talked about, is that the very first slide in that talk was literally answering the question:
C: do you need to bring Kubernetes if you're doing something at the edge? And the answer was "maybe," question mark. This is because, depending on what you have in mind, it could make sense to have a control plane, or even Kubernetes in a field data center, or even Kubernetes on actual edge nodes very, very far away from the cloud. But it all goes down to your use case.
C: Your expectations, your requirements. So you shouldn't be afraid of mixing and matching, in the sense that not only do you need to think about a vendor-neutral PaaS, you also shouldn't be blinded by a single platform. You should pick the right platform for the type of environment and then make sure that they integrate together.
C: In our case at Eclipse we have, for example, ioFog, which is for container orchestration at the edge, something you would use instead of, let's say, k3s or KubeEdge, but it will integrate with a remote Kubernetes control plane. For some use cases that makes sense, and I'm not saying this is the architecture just because it's my own baby project or anything.
D: I think that's a really good point, and another point I would add: what you're always doing in that scenario, having Kubernetes, which was made for the cloud, exist in the cloud, and then using something else on the edge, is one option. Another option is the cloud technologies that have evolved to the edge.
D: Another option is to take the edge version of things that have just been slimmed down, like k3s, and then the third option is to take the things where cloud technology was modified for the edge, like KubeEdge, which really broke up the Kubernetes architecture a little. So I think this section could just be a discussion around those options.
B: Okay, maybe we can mention in this area that this kind of approach would allow you to evolve your big edge application by letting you add more edge nodes, which change over time. So maybe you start with something that's just plain containers, and then somewhere you can do a full three-node cluster or something like that, or add Wasm in the future, or whatever, right?

C: Yes, yes; that way you set yourself up for the future evolution of the platform.
C: You want, of course, to do DevOps, but you would do, let's say, continuous integration, and not necessarily continuous deployment, especially if you have real-time, mission-critical applications running. It's not the time to patch all of those smart cars stuck on the interstate at 4:30 PM in the middle of a traffic jam, for example. Or maybe the traffic jam is a good place to patch them, before they pick up some speed; I don't know.
C: But I think that connection to the developer concerns is important, in the sense that you shouldn't blindly assume that because you know how to build cloud-native apps and deploy them and monitor them, and your reliability engineering and all of that, you will do exactly the same thing at the edge. That's an important paradigm shift you need to take into account.
D: Thanks for pulling us back into that; I forgot about it, and I like that a lot. I think that one kind of explains itself, but is there anything that's a must-have? I mean, we could just mention things here like dashboards, maybe UIs. I guess what the management part brings to it is that where you're centrally observing things, maybe you also want to be able to centrally take actions, but I still think those should be separated out.
B: Yeah, to me the latter one is the previous point, right? We have some kind of PaaS; it's implied in the previous one that we have something that will centrally manage everything, and now we're discussing that it should manage different kinds of things and then work with different PaaSes, right?
D: So what would our one-word name for the PaaS one be? Should it be "vendor-neutral PaaS," or should it be "management at scale"?
E: Between the two, maybe "management at scale" is more specific to what we're getting at here. I don't know a better way to call out "vendor-neutral PaaS" as a distinct thing from that.
E: "Management at scale" is more of the what, and "vendor-neutral PaaS" is more of the how. Good point.
D: So, moving on to the next one: automatic instantiation or termination using declarative intent. To me this sounds like being able to say "I want this many replicas," or "when this happens, I want this to happen": triggers in your system that kick off automatic actions. And then it also says "application automated optimization," which to me sounds like your application knows when to sleep, or when to go down, and when to come back up. It sounds like there might be a lot in this one.
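The "declarative intent" idea can be illustrated in Kubernetes terms: you declare the desired state, and the control loop instantiates or terminates workloads to match it. The names in this sketch are made up for illustration:

```shell
# Declare intent: "I want 3 replicas of this workload." The controller
# creates or terminates pods until reality matches the declaration.
cat > deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics
spec:
  replicas: 3
  selector:
    matchLabels: { app: edge-analytics }
  template:
    metadata:
      labels: { app: edge-analytics }
    spec:
      containers:
      - name: worker
        image: registry.example.com/edge/analytics:1.0
EOF
kubectl apply -f deploy.yaml

# Changing intent is just re-declaring; the control loop does the rest.
kubectl scale deployment/edge-analytics --replicas=1
```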
D: Okay. And then I've added something I think could go along with this, which I would love to include in the paper: some concept of not always running everything. I know with cloud-based infrastructures, sometimes your container is always churning, but over many KubeCons I've heard a lot of talks about orchestrators that can stop services when they're no longer needed and restart them. I think that's because of the constrained resources on the edge.
D: I think that would be a nice-to-have: the ability to turn things off when you no longer need them, to restrict resource usage. But I don't know if that's a paradigm.
C: But there are scenarios where maybe you can take advantage of that capacity you have in the field to do something else rather than the main workload, when the main workload is lower: longer-term optimization problems, say, or you could run some AI models to provide additional value. Let's say you're doing video analytics.
D: I like that. Well, we're at time. I believe there is one more; I would love to just do the last one, but if people need to hop off, I totally understand.
C: That may be the case, but in any case, I will keep an eye on the outline, and maybe, if I have other ideas, put them in. And since I wasn't there: I'm finishing writing a book and things like that; I have so many things to write.
C: Unfortunately, I may not be able to contribute directly, but in any case I'm happy to be involved in this effort, provide feedback, and of course review whenever it makes sense. And if by some miracle I get additional capacity, maybe pitch in, but I'm already in trouble.
D: I think we can save this last one for the next meeting, then; it'll kick us off, and we can call this a great review of what we have. That was a good use of time, I feel, so I appreciate everyone chipping into that discussion. Yeah.