From YouTube: Kubernetes WG IoT Edge 20180914
Description
September 14, 2018 WG meeting: presentation on KubeEdge by Cindy Xing, followed by Q&A and discussion
B
Okay, so today I'd like to talk about an edge project called KubeEdge that we worked on at Huawei. For this project we have two topics accepted by KubeCon Shanghai this November.
B
They can enable a lot of things. As I mentioned, in the first scenario the car and the driver can be recognized automatically, with actions taken. In the second, for the parking lot or the gas station, all the data about the cars and their locations can be collected and analyzed, so that guidance can be provided: a simple guidance board can be shared with users, enabling a lot of efficiency. Then we drilled down into all the scenarios we found.
B
Now, the functionality we need is the following: as a user, I need to register or configure the edge nodes or the things from the cloud, and then configure and deploy the image-recognition AI model or serverless functions from the cloud to the edge.
B
So here is a real customer scenario. We worked with a customer in China; it's a water tank system in a city. As you can see, there are three water tanks, A, B, and C, geo-located at different places in the city, and for each water tank there is an edge node, with the sensors and valves installed alongside it.
B
The tanks are aware of each other, so from a network perspective they can communicate with each other. For example, water tank A can broadcast its water level to the other water tanks. As a whole, the system can intelligently decide how much each valve needs to be opened or closed, so that the whole system is efficient and prevents things like water hammer.
B
Some details: the water tanks need network communication among themselves, and the whole system is decentralized, in the sense that if there is no network connection between the edge and the cloud, the system can still work autonomously and effectively, and react quickly and locally in, for example, the water hammer scenario I mentioned.
B
Actions can be taken quickly and reliably. On the other hand, from a cloud perspective, the admin for the whole water tank system can monitor the health of the whole system. So that is another scenario. Next, I'd like to share how Huawei views the edge. As you can see, the cloud is in the data center; then we have the edge nodes, and then the things, a lot of IoT devices. We see the edge as an extension of the cloud.
B
The edge nodes are located at the edge, but they are managed and registered from the cloud; applications or serverless functions are orchestrated and deployed from the cloud to the edge. So that is why we see it as a cloud extension. Between the edge and the cloud there needs to be bi-directional network communication. Sometimes the edge has its own private network, and we need to make sure bi-directional communication is still enabled.
B
On the other hand, it is a decentralized system: edge and edge can communicate with each other without even being managed by the cloud. Okay, so, last but not least, the edge node itself can be heterogeneous.
B
Yes, like, I think... okay.
C
B
I got it, thanks. Okay, yeah, what I mean is that the edge nodes, the machines, can be of different architectures, heterogeneous, or even involve GPUs, or have different scale settings. But as a goal, we'd like to make sure our system is generic enough to cover all the scenarios.
C
How far down in terms of processor do you envision these going? You know, down to, say, 8-bit CPUs and really small things running non-Linux operating systems, or are you at least constraining this to be Linux everywhere?
B
That's an excellent question. From an OS perspective, right now we are targeting Linux, but I know that for Amazon, AWS is supporting FreeRTOS, though it's still Linux-based on the hub side, and Microsoft is exploring Windows. I think for Kubernetes, it supports any architecture; it's just a kind of execution work.
B
Yeah, okay. For the open source project, KubeEdge, that we are sharing at KubeCon, we are targeting the second one, but as a Huawei product we do plan to support the first one as well. Yes.
B
Yeah, yeah, thanks. Okay, so next we can move on. Here I'm just sharing a very simple version of the whole architecture design, and I'd also like to share the vision we see for this KubeEdge project: we want everything to feel just like working on a Kubernetes cluster.
B
Even though the resources, the edge nodes, are located remotely. Then we have a special component we call KubeBus, or KubeEdge Bus. It enables bi-directional, multiplexed TCP communication between the edge node and the cloud, and in production we even support edge-to-edge communication as well.
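The multiplexing idea described here, many logical channels sharing one long-lived edge-to-cloud connection, can be sketched minimally. The frame format below (channel id plus length prefix) is invented for illustration and is not KubeBus's actual wire protocol.

```python
import struct

# Hypothetical frame format (not KubeBus's real protocol): a 4-byte
# channel id and a 4-byte payload length, followed by the payload.
# Many logical channels then share a single TCP connection.
HEADER = struct.Struct(">II")

def encode_frame(channel_id: int, payload: bytes) -> bytes:
    """Wrap one logical channel's payload into a single wire frame."""
    return HEADER.pack(channel_id, len(payload)) + payload

def decode_frames(stream: bytes) -> list:
    """Split a received byte stream back into (channel_id, payload) pairs."""
    frames, offset = [], 0
    while offset + HEADER.size <= len(stream):
        channel_id, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        frames.append((channel_id, stream[offset:offset + length]))
        offset += length
    return frames
```

Because frames carry their own channel ids, node-status traffic and pod-spec traffic can interleave on the one connection and be demultiplexed on the far side.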
D
B
Yeah, so I personally wrote the edge controller. The point is to address the loosely coupled scenario, because nowadays, when you use Kubernetes, the kubelet needs to report a heartbeat to the Kubernetes API server every 10 or 30 seconds, and, as I mentioned, an edge node can be offline as well.
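The loose-coupling point above can be sketched as a node-health check with a much longer tolerance for missed heartbeats. The numbers and function below are illustrative assumptions, not KubeEdge's implementation.

```python
# Hypothetical sketch of "loosely coupled": with stock settings, missed
# kubelet heartbeats mark a node NotReady within roughly a minute, while
# an edge-aware controller tolerates long offline windows.
DEFAULT_GRACE_SECONDS = 40      # rough stand-in for the upstream grace period
EDGE_GRACE_SECONDS = 6 * 3600   # made-up tolerance for offline edge nodes

def node_condition(last_heartbeat: float, now: float, is_edge: bool) -> str:
    """Return 'Ready' while the last heartbeat is within the grace period."""
    grace = EDGE_GRACE_SECONDS if is_edge else DEFAULT_GRACE_SECONDS
    return "Ready" if now - last_heartbeat <= grace else "Unknown"
```

The design choice is simply that an offline edge node is an expected state, not a failure, so pods on it should not be evicted the moment heartbeats stop.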
D
And this is a fascinating slide; hopefully we can talk through some of the details on it. How does that edge controller compare to the virtual kubelet idea? How does it represent the edge in the master control plane? Is it just a custom resource, or is it actually a proper node?
B
It's a proper node. It's still the same. From a concept perspective it's similar to the virtual kubelet, though each edge node is represented as a single Kubernetes node; we don't hide anything there. Each individual node will be a single node in the Kubernetes API server.
D
So there's a second arrow to the control plane, which is the standard kubelet-to-master connection, in addition to the sync service, or...
B
Yeah, okay. So the app engine is a revision of the kubelet that we wrote. This app engine can take all the pod spec information and do things locally.
B
Another point is that we want this app engine to be as lightweight as possible, because the kubelet has a lot of functionality we may not need in the edge scenario, so we modified it. On the other hand, similar to the kubelet, it can take pod actions and report all the node and pod status back through the sync service to the Kubernetes API server.
C
Is this essentially like running static pods, where, you know, a static pod is one where the definition of the pod resides on the node itself?
B
Okay, I think it's different, because for a static pod there's a shadow, or mirror pod, in the Kubernetes API server. We don't do that. Customers can still dynamically put in their pod specs, and then we just regularly make sure they're synced down.
C
So I'm also interested in how the container images get pushed down. Are these held in an image repository that's centrally managed and then pushed to the locations? And what that implies, which is interesting, is that since you're non-homogeneous, the images that would run at central are likely x86, but the ones at the edge could be ARM, and keeping both in one image repository is something I'm not sure has commonly been done. Have you got a prototype like that working?
B
Yes, yeah, we do. The thing is, back to your question: you know, we use kubectl to register the device, right?
B
It's using something similar to Docker, like the CRI, and basically we follow the OCI protocol.
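The mixed-architecture repository question above is what OCI solves with an image index: one tag maps each platform to its own manifest, and a node pulls the entry matching its architecture. The sketch below is a simplified picture with made-up digests, not a real registry client.

```python
# Hypothetical, simplified view of a multi-arch image under one tag:
# an OCI image index maps platform strings to manifest digests.
# These digests are fake placeholders.
IMAGE_INDEX = {
    "linux/amd64": "sha256:" + "1" * 64,
    "linux/arm64": "sha256:" + "2" * 64,
}

def resolve_digest(index: dict, os: str, arch: str) -> str:
    """Pick the manifest digest matching a node's platform from the index."""
    platform = f"{os}/{arch}"
    if platform not in index:
        raise KeyError(f"no image built for {platform}")
    return index[platform]
```

With this layout the x86 machines at central and the ARM boxes at the edge can all reference the same image name, resolving to different per-architecture manifests.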
C
And then at the central, can the central still monitor what images are running, and pull off, say, rolling updates and things like that?
B
For the rolling update, we haven't tried it yet, but from a monitoring perspective, using kubectl, people can do what we are currently able to do, like querying the node status or pod status. The philosophy, or principle, we are trying to maintain is that even in the edge scenario you work with it like a normal cluster; things just continue working.
B
So that's the way. We plan to open source it, but, you know, we are still working on it, so the target is to announce it and share it at KubeCon.
D
Right, right, I just see it as kind of, you know, putting a flag in the ground. Yeah, yeah. I mean, I think there's a lot of meat in what we could discuss around the nature of the way that controller is representing the resource, and how the...
B
D
...how the container content is pulled, things like Steven mentioned. I guess the biggest overall question is sort of: how and where have you needed to deviate from what Kubernetes does as default concepts, in order to achieve the functional goals you needed? And are there any of those places that could essentially be turned into features in Kubernetes itself, rather than kind of custom adjacent code?
B
I think that's an excellent point. Personally, from a pod spec or ConfigMap or Secret perspective, while implementing the edge controller I don't see Kubernetes lacking any capabilities. The only challenge is around the Service and Endpoint concepts and stuff like that.
B
You know, in the edge scenario, as I mentioned, the edge node can be in a private network. How we can ensure that bi-directional network communication is the biggest challenge, and that's why we enabled the bus. On top of that, how can we enable the Service and Endpoints, and allow customers to just deploy an application and then use a Service to enable...
A
B
Service discovery: that's one thing I don't see; I think there are some Kubernetes features we need to work on there.
B
Maybe we can note this down and then talk about it in detail next time. Because, personally... we work as a team here, right? I own the edge controller, and one or two of my colleagues work on the bus design. So, okay, let's say you have...
B
So I'm not sure if I get your question. The point is, on the cloud there's only one KubeBus service, or it can be scaled. Okay.
C
B
D
I got it, thanks. Yeah, and, maybe feel free again to deflect this until we have the person who can answer, but from your understanding, is the idea of the bus to provide something at the kind of logical message layer, in terms of, you know, discrete controller messages? Or is it to provide something that is more of a network-layer pass-through for being able to reach arbitrary cluster IPs, that kind of thing?
C
Yeah, to some degree. Okay, yes, yeah. And then, at a higher level in the network stack, do you envision providing a service with message queuing, so that if connectivity to the central is temporarily out, the attempts and things would be queued, rather than having to take that on inside applications?
B
So the idea is either a queue, or we have storage locally at the edge and on the cloud, right? So in case it's offline and disconnected, once the connection is rebuilt, the local storage can be synced through the sync service.
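The queue-on-disconnect behavior described here can be sketched as simple store-and-forward. The class below is an illustrative assumption of the idea, not KubeEdge's sync-service code.

```python
from collections import deque

# Hypothetical store-and-forward sketch: while the edge-to-cloud link is
# down, status updates accumulate locally; when the connection is
# rebuilt, the backlog flushes in order through the sync path.
class SyncQueue:
    def __init__(self) -> None:
        self.pending = deque()   # updates waiting for connectivity
        self.delivered = []      # updates the cloud side has received
        self.online = False

    def report(self, update: str) -> None:
        """Deliver immediately when online, otherwise queue locally."""
        if self.online:
            self.delivered.append(update)
        else:
            self.pending.append(update)

    def reconnect(self) -> None:
        """Connection rebuilt: flush the backlog in arrival order."""
        self.online = True
        while self.pending:
            self.delivered.append(self.pending.popleft())
```

The point raised in the question is exactly this: applications don't have to implement retry buffering themselves if the platform queues for them.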
D
And another... this is great stuff, by the way. I just have a question about the metastore. What are you using as a backing store? Is that just another Kubernetes service with a number of storage plugins, or what is it, exactly?
D
You could take it... are you exposing it from the edge controller itself? Are you speaking etcd, using client etcd libraries directly in the edge controller, or do you have a small shim running as a metastore service that just relays?
B
Yeah, good question. The edge controller, similar to the Kubernetes controllers, is calling the etcd client APIs.
D
B
So I think... maybe I didn't make it very clear. Between here and here there's a protocol, right? So then, as long as the protocol is ensured, whatever storage is behind the scenes can be used.
C
Yes. What you're saying, just to be clear, is that the disc-shaped picture down in the edge core, that metastore at the edge, is an instance of etcd?
B
So it can be substituted behind the same protocol. There's a protocol here, so people can use SQLite or etcd here, or it can even be a queue.
C
B
The thing is... okay, here's my personal view: keep in mind the edge node can be really light, you know, memory-constrained, so one has to decide what kind of storage to use. Yeah.
B
Yeah, plus often the requirement on the edge node is lower than on the cloud.
B
So the store would hold, for example, from a Kubernetes orchestration perspective, the pod specs a customer puts in through kubectl, and also the node information and metadata. On the other hand, from the app engine syncing back through the controller, it also stores the pod status and node status, so that it can report back to the Kubernetes API server.
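The pluggable-metastore idea discussed above (a fixed protocol in front of SQLite, etcd, or a queue) can be sketched with a tiny get/put interface backed by SQLite. This is an illustrative assumption about the shape of that protocol, not the project's actual interface.

```python
import sqlite3

# Hypothetical metastore sketch: the edge core talks a small get/put
# protocol, so the backend is swappable. SQLite suits a memory-
# constrained edge node; etcd could sit behind the same interface
# in the cloud.
class SqliteMetaStore:
    def __init__(self, path: str = ":memory:") -> None:
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)"
        )

    def put(self, key: str, value: str) -> None:
        """Upsert one metadata record (a pod spec, a node status, etc.)."""
        self.db.execute("INSERT OR REPLACE INTO meta VALUES (?, ?)", (key, value))
        self.db.commit()

    def get(self, key: str):
        """Return the stored value, or None when the key is absent."""
        row = self.db.execute(
            "SELECT value FROM meta WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None
```

Keys could mirror API-server paths (e.g. `pods/<namespace>/<name>`), so the edge core reads the same logical records whether the backend is SQLite or etcd.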
D
You have this app engine, which is sort of a derivative of the kubelet, right? Yes. And now, instead of having that communicate over a kind of layer-three network directly to the API server in the cloud, you're relaying some similar information in both directions through this sort of alternate path, and I'm trying to get a little bit better understanding of why.
D
B
Okay, yeah, I'd like to hear all your thoughts as well. The main point is to enable the loose coupling, because we assume the edge nodes can be offline and reconnect. On the other hand, all this data is metadata, right, so having some duplication, some redundancy, I really think is fine. The other thing I want to say is that, you know, for the Kubernetes API server, you don't have this in the API server's etcd tree.
B
D
Interesting, interesting. And just to clarify: if I were to do a kubectl get nodes and get a kind of listing of nodes from the cloud, I would see all of the edges as well? Yes, yeah.
B
I think that's a good question. Steve, I would assume you're thinking of when the edge nodes are at large scale, or, as you mentioned, you're thinking about the edge scenario where you want... because the master, one of the masters...
B
From a cloud perspective, we plan to enable multi-master HA capabilities as well, just like what Kubernetes is enabling right now.
C
Yeah, I just bring it up because, in the context of legacy things, like data systems that were IoT before they called it IoT, in high-value things like pipeline controls, it's not...
D
B
So I think the other one is... okay, let me rephrase, or repeat, what we talked about earlier. I think I was asked about what features Kubernetes is lacking after we enable this architecture. So we talked about the Kubernetes Service, and then Steve asked about the image repository for different architectures.
B
Maybe for the others, you can help me understand what exactly his question was about, or maybe I didn't follow it, because from an edge perspective, when the edge node configuration is clear, it knows what to pull, okay. And then I think the third one is about HA. Do you know if there are other comments we can write down?
A
There was one question about whether there is any special protocol between the edge controller and the sync service, at some higher level.
F
Hey Cindy, what kind of workloads do you support on the edge? Is it just pods, or do you support other workload APIs?
F
Okay, so from the central plane, when you use kubectl to deploy workloads to the edge, how do you do it? Do you do like a spray? When you create a pod, is there an object that replicates it on all the edges, or do you target each edge one by one?
B
The scheduler would still do its thing and figure out which pod binds to which node, right, and so...
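Because every edge node is a proper Kubernetes node, targeting follows the ordinary scheduling path: a nodeSelector pins a pod to labelled edge nodes, and a DaemonSet would "spray" one copy to every matching edge. A minimal sketch of such a pod manifest follows; the label key and values are made up for illustration.

```python
# Hypothetical sketch: building a pod manifest that the normal Kubernetes
# scheduler would bind to a labelled edge node. The label key
# "example.com/edge-site" and the image names are assumptions.
def edge_pod(name: str, image: str, site: str) -> dict:
    """Return a pod manifest constrained to one edge site via nodeSelector."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "nodeSelector": {"example.com/edge-site": site},
            "containers": [{"name": name, "image": image}],
        },
    }
```

Serialized to YAML and applied with kubectl from the cloud, a manifest like this would let the scheduler do its usual binding while the sync path carries the result down to the chosen edge node.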
B
F
Okay, so I'm probably curious as to why you chose this design versus running a full controller on the edge, just a lightweight one.
B
You mean run the edge controller fully on the edge core?
F
Okay, okay, that's fine! Okay, yeah.
B
Yeah, and that's pretty much what I have. I think we can talk more, and then I would really like your feedback here. I think I heard Steve and others mention the storage using etcd or a queue or SQLite; can you help me understand the reason you asked that?
D
It's about usages, and that's hypothetical; I'm not saying it would become an issue. I just think that's a potential area where you would want to divide the effort. But, you know, honestly, it could just be an independent etcd instance as well. It just sort of has to do with what's mixing the tasks on the one etcd. Okay.
B
C
That was a great presentation, and if you're going to be presenting at KubeCon China, I'll be there and would love to learn more. Okay.
C
B
D
Yeah, and I know there are legal and all sorts of process reasons, but is there a way for some of us, just in this working group, to get a preview?
B
Okay, sounds good. I think for that I need to circle back and ask, and we'll see.