From YouTube: Kubernetes WG IoT Edge 20181109
Description
November 9, 2018 meeting of the Kubernetes IoT Edge Working Group, with a presentation on an IoT edge use case followed by a demo.
B: Sure. The prep for KubeCon: there are two of them, of course, but Shanghai starts immediately; I'm flying out in less than 24 hours. So that's starting next week. What I did is search the agenda for the word "edge," so it's possible that I missed some edge/IoT-related things, but those are the ones that immediately showed up. That top one is by Dan Kohn, the head of the CNCF, in the keynote, so that session: nearly all computing is edge computing.
B: They won't, in my experience. Lately they will record keynotes, and beyond that, generally the answer is no; they have not been recording sessions, although it's sporadic, so sometimes they get recorded and sometimes they don't. In general the decks will be posted, but that is dependent on the good behavior of the speakers. I'd say 80 percent of the time you can get a deck afterward.
B: I don't know if Cindy is here, but I think that, except for the Dan Kohn one, all of those talks are with her organization. I don't see her on the list, though. In the Slack channel Cindy mentioned that she was going to be there, and I am; I don't know if anybody else is planning on being there. If anybody is, I'd invite you to contact me, but I'm not sure how you can best find me because, as I understand it, my Gmail is going to be blocked.
C: Hi, this is Ito speaking.
B: Blocked by?
B: Anyone with normal Chinese internet access, which would include the conference Wi-Fi. If you have some mechanism for a VPN, you might be able to use Slack, but then the person you're trying to contact may not be able to use it. So I'm not sure how reliable it will be.
E: Steve, are there any interesting soft anecdotes or stories from the Heptio team about edge, given they've done a fair amount of...
B: Well, you know what you're referring to. For others: VMware and Heptio announced an agreement to join; it's a pre-announcement that you have there, right, so I've been given a bunch of advice.
E: I will be interested to know, given that they've done some pretty good in-the-field, in-production consulting work, what they will bring to the conversation for this. Yeah.
B: I mean, certainly they bring a bunch of really smart guys, and that's always useful.
B: Yeah. I started working on Kubernetes when some of those people were still there, and there was a debate on who was going to win container orchestration. But I still remember going back to my management saying: if there's any justice, the smartest guys in the room are on Kubernetes.
E: I do think it'll be interesting, and I think this is relevant to the edge conversation: there's always some risk of curated forks for a certain purpose.
E: If you think about what happened with OpenStack, with people who were trying to find ways to fork and add value, there could be some edge flavor of a Kubernetes fork that somebody tries to market as, "you have to use this version of Kubernetes, because we've done all this optimizing for the edge."
B: There already are, I would contend. There are KEPs, there are discussions in various working groups, and not all of them are in this one, by the way. Some people took the tactic, or their first impression, they being engineers, was: this is the component of Kubernetes that really needs to be modified for our use case.
B: Whether that be kubectl or whatever that component might be, and they went to that SIG and proposed whatever change they wanted to make. It's more of an approach from how you do it, rather than from the actual use case, which should be honed in this working group, but they're scattered all about. Some of them have proposed these even up to the point of discussing them in front of architecture committee meetings. I get the impression that it's important enough to some of these people that, if they're told they can't get it in, or they don't like the CRD mechanism, they can always pick up their cookies and fork it. That's just the way open source works.
E: Yeah, I was describing this to somebody where I likened it to a stew that's always evolving. As long as someone like Google, or some kind of core collective doing the main contributions, keeps adding enough value to the main pot of stew, anyone who ladles out and does a derivative will keep dipping back into that main pot. I think what happened to OpenStack is that the peripheral projects started adding more to their piece than to the main line.
B: Yeah, and people who build other things in the toolchain couldn't count on working with these spin-offs, which was pretty ugly. The other thing is that I don't care so much whether people fork it as whether they keep it open source.
B: I mean, that's a big deal, and I'm not sure that everything that went down in certain other projects like this stayed with that mentality of extending it while keeping a community going, where others can see the work, join in, and contribute to it. As long as that happens, I'm in favor of it, in whatever mechanism people find expedient.
E: Hey, I'm seeing Harold Mueller's here, and I think we had a note from last time that maybe the Siemens lightning talk presentation... Harold?
F: Okay, that's good, okay. So, also a colleague of mine, Srinat, has joined today, because what we would like to do is first give a short introduction and explanation of the use case, and then Srinat will give a short demo. At EclipseCon we showed, or I showed, a video of that demo; we'll try to do it live today. So let's see whether that works.
F: We see a desktop and nothing on it. Okay, now you should see a full screen. Okay, we got it, yes, the presentation. So, actually I'm not sure whether I should call it a use case for IoT edge computing, but it is a project, or what we call seamless computing, and I'm going to explain what we mean by that.
F: Let me start with a quick example that I often use to motivate, internally and externally, why we are looking into this idea of seamless computing. As you know, we are into industrial systems, so everything we talk about in software is in an industrial context: it's software that runs in plants, in buildings, in large civil infrastructure, whatever.
F: It's this kind of system that we are looking at, and I took a typical example, which is a SCADA system. You probably all know SCADA systems: it's a distributed software system that has a couple of software components that run on different hardware devices. You have something that picks up information that sits in the field somewhere.
F: Sometimes it's a PLC, sometimes it's some sort of sensor. You usually have drivers that convert protocols to something that is understood by the SCADA system internally, and then you have the SCADA core that processes all the information it gets: it creates alarms, for example, if some values, or a combination of values, reach a certain threshold. It has a historical database where it stores all the events, and then finally you also have a human-machine interface, a user interface.
F: And if you look at where these components are running, they are running on very different systems and kinds of hardware: from embedded devices at the very low level, to things that might run on premise, in what you might call an edge domain. There are some things that could run in a sort of data center, maybe a private data center of the customer, but it's sort of an IT environment. And finally, there might also be components that run in a cloud.
F: Maybe even a public cloud. And if you look at the characteristics of those hardware systems, you realize that it's very difficult to implement and operate such a system in the field, because you have those software components in these very, very different hardware environments, but you have to make it all play together. You have to update all of it so that it works, and works together.
F: You have to deal with the complete life cycle of this entire system, because it only works when the whole system works and when the pieces are compatible at all times. That is the challenge we want to address, and here is how we want to address it.
F: This is what we call seamless computing. The vision is: you have this infrastructure with the different kinds of hardware we showed earlier, from embedded devices up to virtual machines that live in the cloud, and on the other hand you have a distributed application, an application that consists of several components that interact with each other, that talk with each other, so they exchange information and they compute
F: certain things with this information. And now the vision, or the idea, of seamless computing is: what if the person who implements this complete system just implements these application components, provides them with, let's say, certain characteristics or parameters, and then throws them into the seamless computing automaton, and the seamless computing system takes care of where to run which application component in this range of infrastructure?
F: That means we would like to achieve a harmonized software environment for all the hardware domains that we show on the left-hand side, so that when you develop your application, you don't care where your application component will run in the end. Where it runs is determined, in the ideal case, at deployment time, but maybe even later, at runtime.
F: You might be able to change that, and that is based on allocating the workloads according to constraints. The system can be kind of self-optimizing: based on the current situation of your infrastructure and the current situation of your application components, it can find the best solution for where to run the different components.
F: That may sound pretty familiar to you, and it should. You could also call it fog computing: if you read the definition of the OpenFog Consortium, that is something they have in mind. However, there are other definitions of fog that are quite different.
F: That's the reason why we didn't call it fog computing, and why we gave it a different name, seamless computing: to stress the characteristic of this system of seamlessly combining edge and cloud domains. That's actually what it is. We want to make it fully transparent to run software across this continuum from edge to cloud. Okay.
F: So this was more like the vision, and now we are coming closer to why this might be relevant here as a use case, because we have also started to implement, or to do, some proof of concept using Kubernetes. The seamless computing concept applied to Kubernetes means that we package the workloads, these application components, into containers, and we orchestrate these containers using Kubernetes.
F: To do that, we built a cluster that spans different compute domains: a cluster that contains nodes that run in the edge domain and also cloud nodes. We currently use the existing Kubernetes scheduler, with some of its scheduling features, to distribute the workloads so that they go where we want them to go.
F: That's the basic idea, and now we come to the short demo. What we are showing on the left-hand side is a very simple distributed application. It consists of two application components; it's a very simple IoT app, if you will. The lower component is a component that reads a sensor value, so it's something that is attached to an I/O where a sensor is attached.
F: You could assume that it also stores it, that it has some sort of database, and it displays that value using a graphical user interface. So that's this very simple two-component distributed application. On the right-hand side we have the infrastructure: it consists in principle of three nodes. We have one node in the cloud and two nodes on the edge; both are embedded systems, and one of these embedded systems is the sensor node, the node that has the sensor attached. The second node is also an embedded system.
F: Both are, as you can see, Raspberry Pis in our case, and in the demo they can also run containerized workloads. In the first scenario that we show you, we just deploy this application onto these three nodes, and what you will see happen is that it deploys the sensor-reading component to the node that has the sensor attached, and the second component, the user interface, will also go to the edge node.
F: That is because, let's assume, it's cheaper to run it there, so we don't need resources in the cloud, and it's lower latency. That's why, if a node is available and capable of running that user interface component, we want to schedule it in the edge as well.
F: And what will happen is that the user interface component will be redeployed, or rescheduled, and now the system picks the cloud node as the only possible node where it can run at that moment, deploys it on the cloud node, and we will see the system up and running again after some time.
D: All right, so do you guys see my screen? Yep, all right. As Harold mentioned, this is a very basic Kubernetes cluster. I installed it using kubeadm, so the master node runs in AWS and you have two minion nodes running on Raspberry Pis. I also have a kubectl proxy running on my machine so that I can access the UI directly from my machine; nothing fancy.
D: All right, so the deployment basically has two components: the sensor component and the user interface component. The sensor component always needs to run on the Raspberry Pi node that has a Sense HAT on it, and this is achieved by using node affinities and node labels. The user interface component is preferred to run on the second Raspberry Pi node, but if that is not available it can also run on the AWS node.
D: So, as you can see, I use node affinities here. It says that it's preferred to run in the edge compute domain, using an increased weight for this, but it can also run on the AWS node.
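[For readers following along, a manifest like the one shown on screen might look roughly like this. This is a minimal sketch, not the demo's actual file; the label key/values (`compute-domain: edge`, `sensehat: "true"`) and the image name are assumptions:]

```yaml
# Sketch of the UI deployment's scheduling rules: prefer edge nodes via a
# weighted nodeAffinity, but allow fallback to the cloud node.
# (The sensor component would instead use
# requiredDuringSchedulingIgnoredDuringExecution against a label such as
# sensehat: "true", so it can only land on the Sense HAT Pi.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-interface
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-interface
  template:
    metadata:
      labels:
        app: user-interface
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100          # higher weight = stronger preference
            preference:
              matchExpressions:
              - key: compute-domain
                operator: In
                values: ["edge"]
      containers:
      - name: ui
        image: example/ui:latest   # illustrative image name
```

[The labels would be applied once per node, e.g. `kubectl label node <edge-node> compute-domain=edge`.]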
D: Now, let's see if the pods have actually started. Yes, they have started right now.
E: Maybe a quick question as you go through the demo, because one of my questions is relevant to that affinity piece. How do you specify the architecture in the container image, if you have an ARM...
D: Right, so basically the sensor image is made from an ARM-based Docker image.
D: I don't specify that here. The thing is, the image name is the same for both; it's just that I have the images stored in the local cache on both of these nodes.
E: Okay, that's exactly what I was trying to get to: how did you...
D: Multi-arch images do that, yep. Okay, so the deployments have started, and I guess now you should start seeing the UI. Yes, you see the UI and you see the values coming in. That's a very basic deployment here, and as you can see, if you go into the pods, the sensor component runs on the Raspberry Pi that has the sensor on it, and the user interface runs on the edge node.
D: Now, let's try to force a rescheduling here.
E: And what was the endpoint you were hitting? Is that a proxied port to the master? Because if the service is in the centralized, cloud-hosted control plane...
E: How is it available locally, local to the node that's running it?
D: Yeah, that's basically the idea: I created a service with the sensor API endpoint, and then I use the user interface to hit that particular service. So, at the end of the day, it still needs to go through the master to hit the sensor node.
D: Okay, so now let's trigger a rescheduling within the Kubernetes master. For this purpose I'm going to delete one of the edge nodes. Like I said, there are two edge nodes: one has the Sense HAT and the other one is a normal Raspberry Pi.
D: The Sense HAT node cannot be deleted, because the sensor application needs to run on the Pi that has the Sense HAT on it. So let's delete the other Raspberry Pi node; I'm just going to do a kubeadm reset here.
B: Yeah, what is the networking plugin you're using here?
D: Basically what I'm doing is running a local proxy server here. What kubectl proxy does is...
D: I don't know, anyway. The idea here is that dynamic rescheduling of applications in Kubernetes, currently, you can achieve by triggering a condition that starts the rescheduling, because as far as I understand, Kubernetes does not really try to alter an existing running scenario, right?
D: If you say that Kubernetes should reschedule the pods when a new node comes in, because assigning the applications to that particular node would give you a better-optimized scheduling scenario, that doesn't work with Kubernetes. It always tries to maintain the current state as it is, and only if you actually trigger some failure scenario, like I did here by bringing down one of the nodes, will Kubernetes actually do that.
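[As an aside, the gap described here, no proactive rebalancing when a better node appears, is what the kubernetes-sigs descheduler project targets: it periodically evicts pods so that the default scheduler can place them again. A hedged sketch of its v1alpha1 policy format; strategy names and fields should be checked against the project's own docs:]

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict pods that now violate their node affinity, e.g. an edge node
  # came back but the pod is stranded on the cloud node, so the default
  # scheduler gets a chance to place them better.
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
```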
E: Yeah, so for example, if you brought back the Raspberry Pi node on the edge...
B: Oh, I put my questions in the chat, so I guess I can read them now. The first one is: how well does this solution work when your edge location loses control-plane connectivity? And suppose the edge location cycles power while this happens and comes back disconnected. In my experience, a typical SCADA use case would expect to operate even if the connectivity to the central cloud is down.
D: I would say that the applications that are already running on the system would continue to run; it's just that you wouldn't see that node in the cluster. I don't think that would bring down the applications or anything like that.
D: Like I said, the running applications, if they require internet connectivity, obviously won't run, but if it is something that can run on the system without it, then they would continue to run.
E: The scenario is sort of abstract. I imagine that, especially with the Siemens control PLC gear, all of the edge-critical functions continue to run as they were, and this is a remote-visibility kind of reporting operation: if you're losing connectivity, you lose the remote visibility, but it doesn't mean that the primary historian and PLC workloads don't keep...
B: Well, if the historian is running as a Kubernetes workload, then it is important. In my experience, if the operators lose visibility on this stuff, that's pretty important.
F: I think you would have to look deeper into the scenario: what fails and what comes back in which order, and so on. I agree that in most cases the behavior you have here would probably not be sufficient. But let's assume that the nodes come back and the connectivity comes back as well; then you would have the master again being able to reschedule things. Of course, this would take some time, and then you would...
F: You would have to argue whether this is fast enough or not. Why we currently used this single-cluster setup, with the master in the cloud and some of the nodes in the edge, is driven more by the fact that we think something like federated clusters is not on the same level in terms of the features you have to schedule workloads.
F: In the end, I think you would need something that also enables you to continue to run the workloads in the edge when the connectivity to the master is down; be it that you have a sort of federated cluster and you have one master in the edge.
D: Yeah, exactly. That's also one thing I noticed here: let's say the node goes down for some reason. Kubernetes would automatically restart or recreate the applications on the existing nodes, and then later, when the node comes back, it would see that there are two copies of the same application running, so it would bring down one of them.
B: Networking is a big part of the issue when you get down to it. It isn't just that they can't re-establish, but even the security: if by default every edge node can get at every other one, and you've got thousands of them out there, you've got a big security issue, with physical access to one of these potentially giving a vector to get at all the others.
E: I do have another technical question about when you set this up: did you just use private networking and have direct IP access to the master? Did you use any kind of Virtual Kubelet?
D: Right, that's a good question. We had the problem that the master node needs to be able to reach these minion nodes, and since these guys are inside a private network, we had to create a VPN tunnel between the minion nodes and the master, so that they end up being in the same network.
E: It basically exposes the contract of the kubelet to the master, but through essentially a pod that runs in the cluster as a proxy, where the implementation of the Virtual Kubelet is entirely an implementation detail. So, for example, you could use some other secure protocol, like MQTT, down to a remote node, as essentially a way of tunneling that communication between the master and the node.
D: Not really; I haven't really done that, and I think that's a very interesting idea. I would definitely be interested in trying it out here.
B: Another question to kick around is whether anybody thinks that the things going on with service mesh might have some utility in these edge use cases, as a means of supplementing behaviors in the default CNI-type networking at the lower levels of the network stack.
E: Speaking of which, I'm going to chuckle a little bit at these naming things, because you're saying how this is sort of a distinct facet of, or fork from, fog computing. But if you look back, ubiquitous computing is probably the first moniker that was applied to this, and that was back in '88. So this has definitely been kicking around in many forms for a long time.
E: The question I had on your seamless computing architecture was: you had this going all the way down to the microcontroller level. What was, at least, the imagination of how you might tackle something that was crafted as a Kubernetes workload? Are you imagining somehow recasting or code-genning some embedded code for microcontrollers based on that, or what was your initial thinking there?
F: Actually, no. Currently we see the border really at systems that are able to run containers and that can be managed using Kubernetes. With that implementation you might think of extensions, but currently that is not something within reach of our thinking for the near term.
F: Really, currently we just put that there with the understanding that systems are growing, compute capacities are growing, even for embedded systems, and things get bigger and more capable. So many nodes, even embedded systems that today are microcontroller-based and very constrained, might tomorrow go in the direction of running standard operating systems and container environments.
F: That's the way we are thinking currently. At least we show them, because we want to make clear that we have to be able to connect them. They are not really part of the managed part of the system, but they are part of the overall system, in the sense that they can be, or have to be, connected.
F: We have to be able to connect them and deal with all sorts of constraints that arise from that fact, like we've shown with the sensors connected, or a PLC connected to certain nodes. So we have to constrain the scheduling of the software components to those locations.
E: It does seem to me that if I were to pick one thing out of this to make sure we capture and highlight in our overall work, it's the potential value of a more aggressive scheduler capability: one that proactively ranks the value of rescheduling higher than the interruption it would incur, if the user had a reason for that.
F: Yeah, and maybe there's another aspect that we have not shown in this demo, or that I have not explicitly mentioned, but one area where we think the Kubernetes scheduler currently has a limitation, if not a blind spot, is a topic I would call latency-aware scheduling. Consider that you have a distributed application with some latency constraint, or latency-optimization demand, between at least two of its components; you schedule these independently, and you want to make sure that Kubernetes keeps these constraints.
F: And how can you make Kubernetes schedule these two components so that this constraint is kept?
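[For context, the closest built-in mechanism is inter-pod affinity, which can express co-location by topology but not an actual latency bound. A minimal sketch, with the labels and image name as illustrative assumptions:]

```yaml
# Sketch: co-locate this pod with pods labeled app=scada-core.
# topologyKey kubernetes.io/hostname means "same node"; a zone or
# region key would relax that to "same zone". Neither expresses a
# millisecond budget, which is the gap being described here.
apiVersion: v1
kind: Pod
metadata:
  name: hmi-frontend
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: scada-core
        topologyKey: kubernetes.io/hostname
  containers:
  - name: hmi
    image: example/hmi:latest   # illustrative image name
```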
F: Yeah, so we have the feeling that we need something more advanced. Hi, guys.
H: We did a sort of proof of concept, a prototype, that managed this kind of situation: placement of containers based on requirements between two or more microservices or containers. These requirements could be, of course, traditional requirements like the ones already managed by the Kubernetes scheduler, like CPU, RAM, etc., but we also added some additional constraints based on network capacity, meaning latency, bandwidth, etc.
H: We are preparing a self-contained demo environment that will let people try what we did on their own laptop, but it is not really ready yet. We are available to demo, or to show you, what we did.
H: We also worked on scenarios like the one you presented in your demo, like the one where you cut the connectivity between the master and the edge node. What we experienced, in fact, is that all the services provided by Kubernetes, for example the DNS service, are not available anymore, meaning that two microservices running on the edge cannot talk to each other anymore, because they cannot contact the DNS that is probably deployed...
H: I will give you a confirmation early next week, because we have a lot of things to...
A: But I was thinking, in all these talks, that this might be a good start. As you said, one of the missions for this group would be to document all these use cases, create demos like this, and then, you know, try to move things forward.
A: I think, if there's a will, we could just start with creating a small open-source project out of these two, or putting it up somewhere on GitHub, where people can start playing with really good things like this. I don't know what you guys think.
F: Yeah, I think it's a good idea, and as you said, Preston, it's a very simple demo, but it already shows, or brings up, some of the challenges that you have. I think if we construct a couple of those simple things, we will bring up, hopefully, most of the things that we need to solve, and extend Kubernetes with, to manage all of this.
D: But that's on the planetarium.
B: ...conference starting that week, and I've got it sandwiched in between some other things, and we're recording.
F: Yeah, I can do that, perfect, thanks. We also started, I think last week, to kick off the internal process that we have to follow to put some of the code of this demo online, so hopefully that will work out in a couple of weeks.
A: So, as a working group, do we have, oh, I think we should have some space to put material like this, right, and present it. Maybe we can try to do that under the umbrella of this working group, if that's okay with Siemens as well. Is there a main contributor for now of that piece of...
A: So maybe try to find a GitHub place under this working group where we can host the material for this demo, for example.
B: The Google Doc has less formality than GitHub, where, under the Kubernetes project, you're going to have to appoint approvers and reviewers.
B: Yeah, and if somebody was thinking about that one: based on my experience, you can post doc links in Slack, but Slack isn't really good for searching for things, so I don't think that's a great solution. Putting it in our meeting notes works for now; it's small enough that...
E: Thanks, Harold and Srinat, for presenting; that was great.
A: Yep, go ahead. It says there was a problem in the, I noticed as well, in the calendar link. Is that what you're referring to? Yeah.
A: Yeah, I'll change it next week, just to make sure that the calendar invitations...