From YouTube: Kubernetes WG IoT Edge 20200715
Description
July 15 meeting of the Kubernetes IoT Edge Working Group
A
This meeting operates under the code of conduct of the Kubernetes project; the summary of that is just to be nice to people. The co-chair of the group, Dion from Red Hat, told me that he wouldn't be able to be present today, but we'll do the best we can without him.
A
When I looked this morning, no one had put any suggestions on the agenda, but, as usual, any group members or attendees are welcome to put things in at the last minute. We've had meetings before where nothing was on the agenda to start with, but we ended up getting some fascinating discussions.
A
There are a few people here who are regulars, and maybe a few of you have been here only once or twice. Jump in if you'd like to introduce yourself.
B
Yeah, my name's Greg Waines. I work on the OpenStack StarlingX team. StarlingX is a project within OpenStack that deploys a Kubernetes cluster, as well as a containerized OpenStack cloud on top of that Kubernetes cluster. I think I might have joined you when Ildiko joined as well, to talk about the OSF edge group.
B
Anyway, I'm interested in this group because StarlingX has a kind of distributed-cloud deployment configuration, with a central cloud and geographically distributed sub-clouds at the edge, and I'm interested to see what work is going on here that we could perhaps leverage.
B
And Kubernetes as well. At our lowest level, OpenStack is sort of just an application for us; at the base we basically manage Kubernetes on bare metal, on dedicated servers. So when I say central cloud, it's a full Kubernetes cluster, with masters and workers, and the sub-clouds also have masters and workers, mainly so that they can be autonomous if they lose connectivity to the central cloud.
A
Okay, so it's a solution that, it would be fair to say, federates these other clusters by running one instance of a central controller, with the other independent ones under its management.
B
Yes, exactly. The central cloud is primarily used for orchestration or federation purposes across the sub-clouds. Right now the orchestration is more around being able to do easy, one-touch sub-cloud installs, as well as monitoring alarms and collecting Kubernetes events through Elastic, that sort of thing.
A
Okay, great. I think that is definitely something members of this group would be interested in. Since we have an open agenda, you're welcome, if you're prepared, to go into a description of it in more detail, or maybe we can put you on the agenda for a future meeting if you'd rather have a little more time.
B
Yeah, I wouldn't mind getting on the agenda for a future meeting, and maybe preparing something a little better to go over. But do you talk much about KubeFed here?
A
Well, it's interesting! Other members are welcome to join in, because I don't think there's a consensus on what the recommendation or current state is here, but I'm aware of projects that have gone with a philosophy of a central, cloud-hosted Kubernetes that manages cluster nodes out at edge locations. So it's essentially one instance of Kubernetes that has geographically scattered worker nodes.
B
Yeah, we had the same discussion in Ildiko's group, around whether it's one cluster or multiple clusters, and we arrived at the same conclusion: different use cases will demand different kinds of architectures.
B
So we kind of promote both, though within StarlingX we're definitely in the multi-cluster scenario. I think I actually attend the SIG Multicluster group as well; is that where they would talk more in depth about KubeFed, is that the right understanding?
A
I don't know the multi-cluster group well from my perspective; I don't regularly go to those meetings, maybe catching a few a year. I think that most of the ones I've caught are oriented more towards traditional, large-IT federation: multiple data centers, multiple clusters in public clouds, and not so much device edge and the kinds of smaller things that historically have been most of the attendees at this working group's meetings.
A
Cluster federation or multi-cluster maybe would be done differently when your clusters are all very large and have a lot of resources, to the point where you can really take advantage of elastic scalability. Whereas in some edge deployments you might aspire to having enough backing hardware resource to really claim you're scalable, but the reality is that there isn't much there to let that take place. So there's even a third mechanism to approach this, out at the edge.
B
Well, actually, the use case we've mostly been working in is a telecom use case, where they're trying to push their cloud applications out closer to the edge, for all the normal reasons of latency and the new style of apps they're doing. And some of these remote places are in locations where the network connectivity isn't super reliable.
B
So they have the requirement that, because the connectivity between the central cloud, where they want to do orchestration and management, and the remote nodes isn't reliable, they want the edge nodes to be autonomous, in the sense that if they lose connectivity to the central cloud, they don't lose any functionality. They can still recover from container failures; they still have full functionality with respect to their applications.
B
So that's our kind of high-level context for the use case, and, like I say, it's typically telecom users that are looking at it. As I mentioned before you came, I work on the OpenStack StarlingX project, and most of the users looking at that project are looking for that autonomous behavior at the edge, but with centralized orchestration.
D
Okay, I see. So far we've seen two kinds of autonomy, or two design patterns for autonomy. One is that you can run a full Kubernetes cluster on the edge and achieve autonomy that way.
D
The second one is the KubeEdge-style design, where you run just an edge node on the edge and, by caching the metadata on the edge, you achieve autonomy as well. So the reason I'm asking about multi-cluster support is: are you running multiple clusters on the edge, such that you want to have a control plane in the cloud?
B
Yeah. Rightly or wrongly, two or three years ago we decided to go down the path of running a full cluster at the edge, so a full control plane. One advantage StarlingX has is that we can scale pretty small: we have all-in-one nodes that run both master and worker functionality, and we can sit on a small Supermicro single-socket, eight-core system and have a decent number of containers running on that, not using much more than about one core for the platform, leaving seven cores for the hosted applications. So that was kind of one reason we went down the path of full control planes, full clusters at the edge. They might be really small, and they can grow big; our current users are using tons of really small sub-clouds.
D
Okay, I see. So you're saying you have several small Kubernetes clusters on the edge and you want to centrally manage them from the cloud, right?
D
Okay. I know that nowadays service meshes support cross-cluster communication; that shouldn't be a problem. So if you don't need a central control plane, you can actually leverage Istio to enable cross-cluster communication on the edge.
B
Yeah, that's an option. Just generally, operationally it seems nice to have the central cloud, because all the installs of the sub-clouds are done from the central cloud. It's kind of a single pane of glass for seeing an overview of the state of all the dispersed sub-clouds.
D
I think one issue is the control-plane overhead of having a multi-cluster, cloud-federation control plane. The second is synchronization: because of the network-reliability issue you mentioned, between the edge control plane and the central control plane there can be synchronization problems.
D
How would you ensure... yeah, that needs to be addressed.
D
So I think that could be one of the reasons why Steve was saying that for multi-cluster in the data center you can have multiple clusters, assuming the network reliability is still there, and then you can manage multiple clusters.
D
If you have multiple clusters and some of them can be disconnected, then how can you ensure your federation view in the cloud is still reasonable? Because you do scheduling of the resources, and the workload status may not be true, right?
B
Right. Actually, maybe I was mixing up the two groups: the SIG Multicluster group and this group, which is more IoT and edge. What are typically the types of topics that your group looks at, with respect to Kubernetes at the edge?
A
Well, I'll jump in here and answer that. I think it's tough to pigeonhole, because we've done polls of the people attending this group, both in meetings here and at the physical meetings at KubeCon conferences, and they divide into use cases from telco, device IoT, and retail IoT, which would be standing up things to host containerized workloads in retail stores, ranging from sandwich shops up to very large stores that might have a rack of equipment. It's so varied that it's really tough to classify.
B
Yeah. Ignoring the distributed-cloud stuff for now, in StarlingX we're interested in a lot of those same topics, just because our sub-clouds are at the edge. We're very interested in deploying Kubernetes at the edge for managing Kubernetes-capable devices, like a small Raspberry Pi. We did a demo about a year ago where we used our StarlingX all-in-one server on a little Supermicro, and it was managing a number of Raspberry Pis running simple Kubernetes.
B
The idea was: if I had devices that were a little bigger and could run Kubernetes, like a Raspberry Pi, then I could tie them into my StarlingX cloud, which has services like a registry and a storage back-end for PVCs, and stuff like that. So it's got a lot of services it can offer these small Kubernetes devices. But at the same time, we're also interested in what services we can provide to non-Kubernetes devices from the small Kubernetes sub-cloud.
A
I do encourage you, if you want to give a prepared presentation overview of StarlingX and how it relates to Kubernetes at the edge, to please do so at a future meeting of your choice. We've got two different time series going on: this one repeats every four weeks for North America, and there's one at an alternate time for Eastern Europe and Asia that also repeats every four weeks, but between the two of them we have a meeting every two weeks. So you're welcome to choose either one of those and put yourself on the agenda.
A
Okay, with that said, does anybody else in the group have any parting comments or observations they want to make with regard to the topic Greg brought up?
F
This is Kilton here. I would just say that thematically it's very similar: just about every architecture needs to be everywhere, and the approach is the thing that depends on what your back-end system is, and so on.
A
Yeah, excellent. And the other thing I'd suggest, if you can put this in your deck, and maybe this even segues: I noticed somebody put opinions about Project EVE on the agenda, which, if I understand it correctly, is something that goes down more towards the hardware layer. But if you could address, for StarlingX, what the actual process and the day-to-day management and maintenance experience would be for getting StarlingX actually deployed out at edge locations, if that's what you propose it's suited for.
B
Yeah, okay, for sure. We've actually done some really cool recent work leveraging Redfish virtual media controllers, and we've really got a one-button way to fully install and deploy a sub-cloud.
E
Oh yeah, that's the due diligence. The KubeEdge application for incubation is now in the public comment phase. So if you are interested, could you go there and take a look, ask questions, or just show your support? That's about it. Thank you.
E
Yeah, it entered the sandbox, when Cindy was still here, in March 2019. The thing is, I don't think it's a distribution; it's just an extension. We are not going to replace Kubernetes, just extend Kubernetes' ability to the edge, so a little correction there. I will put the GitHub pull request link there too, so if you're interested, you can take a look.
A
Yeah, and what this is about is that the CNCF has a process whereby it adopts projects at different life-cycle phases. Many projects go in at the sandbox level, which implies that they're very early stage. The CNCF itself has a description of the different categories, but projects typically aspire to move up as they build a contributor base. Projects can be accepted to the sandbox when they are just getting started and don't have huge numbers of contributors or users, but still want a foundation hosting them to try to build that kind of momentum. There's also a procedure whereby you can document the community health of your project to move up into higher stages of support within the CNCF.
A
So KubeEdge is applying to do that, and people familiar with it are welcome to weigh in. To actually do the graduation, it's the TOC committee in the CNCF that has to vote and approve it, but they do solicit feedback from members out in the community. So people who have an opinion on this are welcome to make that opinion public, to help guide the TOC.
A
Okay. Any other comments on KubeEdge before I move on to the next item in the agenda?
A
Okay, next: I'm not sure who put it in the agenda, but somebody added a line wanting to know about opinions on Project EVE. Whoever did that, maybe you can speak to it.
C
Yeah, that would be me. For those who are not necessarily familiar with the group: hello, I'm Frederick, I'm working at the Eclipse Foundation, and I manage IoT and edge computing programs over there. And, if you didn't know, this working group is a joint initiative between the CNCF and the Eclipse Foundation.
C
So we are glad to collaborate on that, and we have our own edge computing working group, which is centered on the two edge computing platforms that we've got, Eclipse ioFog and Eclipse fog05. As part of that, we're trying to find our place in the world, so to speak, and having a look at various things that exist in the market, to try to contrast and compare them with the platforms that we have and the direction that we want to take.
C
The one thing that struck me when I read what's on their website is essentially that they say there is a kind of open-source reference controller for the platform that implements the APIs, but that the production-quality ones, so to speak, would be found in commercial offerings, or offerings that are not necessarily part of the open-source core. I wanted to get some perspective about that project, its level of maturity, and what you think of it.
C
Have you ever kicked the tires on it or not? I unfortunately don't have the time to try it myself, so I was hoping that maybe someone on the call has some perspective about it. If not, that's fine; we'll eventually find a way to evaluate it. But I was curious about your collective perspective on that particular project.
F
I haven't used it yet. I have been following it a little bit since the project was announced some time ago, and last I had a look at the code, it seemed that it was an edge-virtualization Linux distro. But I'm not sure if it is now installable on top of your own OS, or if it still is the OS. So I guess the short answer is: Steve, I have not used it, and so I don't know.
A
Yeah, my reaction, and possibly it's wrong: I came across it, I think I saw a session on it at Open Source Summit or something, and I hadn't heard of it before, so I went and looked up the description on the web page.
A
Let me post what I looked at in the chat. I gathered that this is a hypervisor plus more, and that hypervisor, if I read it correctly, is based on Xen. It has verbiage on the web page saying that they interact with a hardware root of trust like a TPM, up through OSes, and then maybe even containerized applications from there.
A
That's what provoked me to actually make an attempt to look into what this was.
A
Yeah, and it strikes me that this plan of going all the way from hardware up to containerized apps is a pretty big undertaking.
A
So I think that you're going to end up with a wide variety of components there, and I think the bigger you get, the more you really need to put in place a structure that allows a plug-in architecture where things can be swapped out, like if you're going to support going from hardware to a hypervisor to an OS to containers.
A
I would contend that I'd rather see something where the hypervisors are plugged in, so you can choose your hypervisor; where the OSes are plugged in, so it isn't just that they choose the operating system for you and you're stuck with it; and then, finally, the container runtime should be pluggable. It's nice to say that all these abstraction points are nice, but as you add them, you've taken on a pretty big project at that point. Oh yeah, believe me.
A
So when you get that big, I become pretty skeptical, because there are so many things, any one of which can go wrong and make your platform not viable. Historically, I think people have managed to pull that off, but it usually ends up being something where there's a de facto, commercially funded implementation that makes money to fund all the R&D that's required, and that maybe is published in an open-source form; and maybe that's what this is.
A
But it just strikes me that for this to be successful, there are a lot of moving parts here; it would take a very large community. I think at this point they'd need to somehow enlist multiple organizations to get behind it. It's too big to be successfully driven by just one backer, I think.
C
And when you drill down a bit more on their website, they have a page that references compatible hardware and then implementations, but right now it references just the open-source controller, which it seems is not production quality, so to speak. And then there's one commercial implementation by a company called ZEDEDA. Have you ever heard of those guys? Because that's not a name I recognize either.
C
Looks like it's not the case, but that's perfectly fine. In one way this matches my perception, in the sense that, yes, to me it appeared a huge undertaking, one; and two, it didn't seem to have, for the time being, wide adoption or even brand awareness in the ecosystem, so to speak. So that's in line with my expectation, but yeah, I tried my luck here, and I don't want to eat up the rest of the meeting talking about that.
A
Yeah, I'm going to put another link in the Slack, which is the basis for my conclusion that it's perhaps Xen-aligned. In a confession of possible conflict of interest, for those who don't know, I'm employed by VMware; so, being employed by VMware, I might favor a project that would allow hypervisor plug-ins, for example.
A
Maybe I'm not company-aligned enough to demand that the only hypervisor be a VMware one; that's just not realistic these days. But likewise, my personal opinion is that in these big stacks, taking on things where the hypervisor is a component, it should probably be pluggable, to accommodate multiple implementations. In furtherance of that position, I think these days you need to at least be able to swap out hypervisors to support both x86 and Arm.
E
Yeah, I had to work with BOSH for a long time, so I know it pretty well from the inside; I even wrote a few BOSH CPIs and releases, so I know how hard it is and how long it takes. But I don't think the new Tanzu uses that, right, Steve? The new Tanzu, it's not using BOSH, right?
A
We can go on with the next topic. I'll tell you what, for EVE: if you're interested in taking this further, it seems like none of us really are authorities on EVE. It could be the story of "in the land of the blind, the one-eyed person is king," but maybe I should reach out to somebody from Project EVE and invite them to come and give a presentation.
C
Yeah, to me at least it's certainly of interest, but if that's just me, then...
F
Yeah, I'd love to have it from the horse's mouth here. Okay, great.
A
I'll take an action item. I'm not promising it will be in the next meeting, or even the one after that, but I'll try to track down somebody from the project and see if they want to come on and give a presentation at a future meeting. It's better to get the authoritative story from them rather than a bunch of people speculating. I'm kind of skeptical, you know; you run across these things.
A
You know, they go through this "we're going to support containerization," and when you drill down and look at it, there are a lot more aspirational things than real ones in a lot of these projects that I've taken a look at. So I've gotten a little skeptical about it myself. I don't know if anybody else has.
F
I feel exactly the same. Yeah, sorry, go ahead, Frederick.
F
I mean, edge is the new cloud, right? If you remember back in the early days of cloud, before anyone really had a clear definition of what differentiated a cloud system from your own data center, you'd tack "cloud" onto everything. And what we've got right now is, well, over at the Eclipse Foundation...
F
The reason we called our working group the Edge Native working group is that we're really trying to redesign things from the edge in, for that reason. Because if you tack "edge" onto things that previously were designed for high-speed, interconnected data-center systems, you probably will be able to make the functionality go. But is it really designed for the environment, or is it something that you've added on? There's no problem with adding on stuff and starting to iterate.
F
But what I'd consider to be a kind of edge-ready system is one that has really re-thought the architecture from the edge in, and then merges with the other stuff. It doesn't matter if you're extending or whatever; all of these are viable architectures. You could have an edge-oriented control plane that's separate, as you do with Eclipse ioFog; it's all fine. But really thinking through what your environment is like takes time, and so there's a lot of stuff...
F
...that is, at first blush, "edge editions," and I think, on deployment, usually they find that they need to go back and rethink how some of the things work. The primitives are just very different.
A
Well, you planted an interesting seed in my head, Kilton, with that remark comparing it to cloud, where I think you're spot on. I think back to the early days of cloud, and every player in it, whether they were hardware, services, or software, wanted to tack "cloud" on in some way, because maybe Wall Street would even reward you for being associated with it.
A
The reality is, it came down surprisingly quickly to: like many things in the business world, if you're not in the top three in market share, you don't make money. Now, granted...
A
...this is an open-source group, so maybe money isn't the focus, but I think sort of the same thing applies in terms of building a critical mass of a community. You can't have 12 successful open-source projects all getting enough resources to be doing well; it typically narrows down to a top three or so that managed to get enough community participation to make their projects healthy and viable. And a lot of this edge stuff is everything out there, including a lot of legacy.
F
It is challenging, and it's hard to make sense of what is real, what is hype, and what is future-focused. It's like evolution, right? You have this Cambrian explosion of species, and they compete each other away; the ones that are well suited to their particular use cases gain a stronghold and then have a flourishing population.
A
Now, I don't propose that this group, or any other group, should appoint itself kingmaker; it's more that you want to provide a forum to expose these different projects and ideas, and let the best ones win. So I'm happy to sit here discussing these things and hosting presentations, and kind of let the user base sort it out by natural evolution.
A
Okay, well, with that philosophy, then, I will go try to recruit somebody from EVE to give a presentation, and we'll allow each observer to draw their own conclusions. Does that make sense? Thank you so much for this.
A
Anybody else have anything they want to bring up or discuss? Or will we wait to see if Tina will come back to talk about...
A
So I'll put something out there, fishing for ideas from members. Cindy and I are destined to give a talk at the online version of KubeCon Europe on running applications at the edge, and it would be helpful to us if any of the rest of you have ideas for things you might like to see covered in such a talk.
A
I'm thinking that the topic of apps at the edge is pretty broad. In theory, you can run whatever you can put in a container if Kubernetes is involved, but potentially we could go into things like function-as-a-service or other things. So I'm just throwing this out there for comments.
F
Yeah, well, Steve, previously you said that you would be interested in doing a little bit of digging into which lambdas and serverless function frameworks would be viable for use on lower-power hardware. Of course, there's a bazillion things out there that we could constantly be researching and talking about, so I don't think we ever got to do that in this working group.
D
But I think, Kilton, you brought up a good topic. We can definitely share the learnings, or what we can see in the open-source community. As you know, Knative is a way for people to run serverless, potentially on the edge as well, and I've seen some other ways where people create their own scheduler or dispatcher on the edge. As with the previous KubeCon talk, we can definitely bring in this topic and share what's available, and people can pick and choose. Cool.
A
Yeah, the other question of philosophy is whether you take the approach of running the lambda out at the edge device, or of viewing the edge device as a generator of event-driven data streams that perhaps feed into a lambda up at a gateway node, just one tier above. I think there's probably room for both of those to be of interest to some use cases.
A
Makes sense, yeah. Even the act of putting the infrastructure in place to enable these event-driven streams, whether it be MQTT-based or Kafka-based (and I think there are other possible solutions), is a pretty interesting topic that you could spend a lot of time on.
D
By the way, I'm curious: Steve, you mentioned that some people use MQTT and some use Kafka. I'd like to hear your thoughts in general on what you have observed with those two. Kafka can even support the general MQTT protocol as well, if you build a connector, right?
A
Yeah, I think one of the challenges is that, to feed data into one of these standardized event flows or time-series pipelines, you're generally going to have to run some kind of app down at the origination point, and I think these systems vary in how resource-intensive landing that initial publication point is.
A
So there might be a trade-off there, where some of these platforms give you more features, but at the cost of more resource consumption. Some of them may or may not be as tolerant as others with regard to intermittent network connectivity and performance, too. I think that's one of the reasons why MQTT is sometimes favored, or even message-queuing protocols like RabbitMQ and things in that family, because of the ability to queue things when network connectivity is intermittent.
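The queue-on-disconnect behavior being discussed can be sketched in a few lines of plain Python. This is a toy illustration, not any real MQTT or RabbitMQ client API; the `send` callable and all names are made up for the sketch, and a real client would also persist the queue and bound it by age or size.

```python
from collections import deque

class StoreAndForwardPublisher:
    """Buffers messages while the link is down and flushes them on reconnect,
    mimicking the queuing that MQTT/RabbitMQ-style clients provide for
    intermittent network links."""

    def __init__(self, send, max_queue=1000):
        self.send = send  # callable that raises ConnectionError while the link is down
        self.queue = deque(maxlen=max_queue)  # oldest entries dropped when full

    def publish(self, message):
        self.queue.append(message)
        self.flush()

    def flush(self):
        # Deliver in order; stop (keeping everything queued) on the first failure.
        while self.queue:
            try:
                self.send(self.queue[0])
            except ConnectionError:
                return
            self.queue.popleft()
```

The `maxlen` bound is the interesting design choice: on a long outage the oldest readings are silently discarded rather than growing the buffer without limit, which matches the "small edge box" constraint the group keeps returning to.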
F
I've also been doing a lot of work recently with very large message payloads, i.e., video and image frames, which are not really well suited to anything except the lowest possible overhead. Any pub/sub system, while beautiful, probably needs to step aside in that scenario, because video frames at 60 frames a second are very difficult to queue and deliver.
F
So you should really just allow someone to tap them. There's a mix of use cases, and there's a right architecture for twenty percent of them and another right architecture for thirty percent. The question is: should they all be one thing? Should they all be orchestratable in the same way? Maybe, maybe not. I don't think it's right to say there's an answer at this stage of edge computing, but what there should be is a lot of exploration.
A
Right, and my days in device IoT at Wonderware bring me back to your point about things being different. The two use cases of sensor-reading IoT and video are such that, I believe, the common ask isn't that these things be queued indefinitely; you'd rather throw data away and get the fresher data. You care about what the temperature of this vessel is now, not what it was five minutes ago, if your connection got interrupted.
F
That's right. In fact, it's so useless that you can drop it on the floor after a certain amount of time, because the actions you would take on it are not actions that would produce good results. You'd rather have no data than video that's too old, or a sensor reading of zero.
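The "fresh data beats stale data" policy described above can be sketched concretely. This is a minimal illustration with invented names (`FreshnessBuffer`, the injected `clock`), not any particular product's API: consumers either see a reading younger than the freshness window or nothing at all.

```python
import time

class FreshnessBuffer:
    """Keeps only readings younger than max_age seconds, so after a
    connectivity gap a consumer sees no data rather than stale data."""

    def __init__(self, max_age, clock=time.monotonic):
        self.max_age = max_age
        self.clock = clock        # injectable for testing
        self.items = []           # list of (timestamp, value)

    def put(self, value, timestamp=None):
        ts = self.clock() if timestamp is None else timestamp
        self.items.append((ts, value))

    def latest(self):
        """Return the newest reading if still fresh, else None."""
        now = self.clock()
        self.items = [(t, v) for (t, v) in self.items if now - t <= self.max_age]
        return self.items[-1][1] if self.items else None
```

For the operator-console scenario discussed later, `latest()` returning `None` is the safe outcome: the display can show "no data" instead of a five-minute-old temperature that looks current.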
A
You know, doing retries to the point where, when things resolved, you were getting stale data rather than what you really cared about. That's at a much lower level of the protocol stack, and some of these messaging systems are even a layer above that, but if they ride on TCP, you've inherently got potential issues even at the layer below.
C
Yeah, most of the protocols that are widely used right now are rather dependent on UDP, or something that would be UDP-like in nature.
A
Yeah, and the specific application I was working on was control panels for human operators in factories and process plants, where they would be at these big consoles controlling a factory or a pipeline. If there are disruptions in the connectivity to their sensors, you really don't want five-minute-old data, especially if it initially causes an operator to think it's the current state of things when that isn't reality; horrible things can happen from that.
A
Those were early days of resolving some of these issues, but the problem is still there. And I think there are circumstances, with, I don't know, financial transactions or something, where you'd be really concerned about the opposite. Even in smart cities, where somebody feeds a credit card number into a parking meter, you don't want that to be lost, so queuing would be favored there. It's still an edge application, just with different requirements than certain other things.
F
And to add one more layer to it (I know that we're out of time): if you are going to drop stuff on the floor, do you need a lightweight audit trail to tell that you've dropped data on the floor, and which data you did deliver? And if you don't drop data on the floor, do you need an audit trail saying that the system has, in fact, delivered all of the messages?
F
The main reason I bring it up is: imagine doing some artificial intelligence or data analytics on some sensor or video data that arrived and was in fact five minutes old, because there was some flaw in the system. You would want that recorded, to know that you issued a parking ticket for someone based on old information that arrived five minutes late. That way, at least when you audit, you're not just saying, "well, that's what the system reported."
F
The more autonomous we try to make the world, the more record-keeping we probably need to do, to be able to shore up unfairness and things like that. It's a whole other deep topic: distributed audit trails.
D
One thing I want to mention is the survey: we listed survey questions for KubeCon Europe in August, and we'd like to submit the survey soon. So please go there; I shared the link in our Slack. We'd like to hear your thoughts, so please put your inputs there.
A
The first one, yeah. And I apologize to Tina if we were supposed to have you on the agenda; we don't have time this time, but we'll roll that into a future meeting.
A
You can let me know whether you'd prefer the Eastern Europe/Asia time cycle or this one. Either one works? Okay, I'll put you in the next meeting; that's in a couple of weeks. All right, thanks, everybody, for attending. Two weeks from now there's a meeting on the Asia time cycle, and four weeks from now there's one at this time. Thank you.