From YouTube: Kubernetes WG IoT Edge 20181123
Description
November 23 2018 meeting of the Kubernetes IoT Edge Working Group - demo of Fog Atlas https://fogatlas.fbk.eu/
B
Okay, give me just a second: I will share the screen.
C
So basically the idea is to present, I mean, the classical 5W questions, for better understanding of what FogAtlas is. First of all, who are we? Well, we are, first of all, a research unit called RiSING; RiSING stands for Robust and Secure Distributed Computing.
C
We are basically working on distributed computing and on the cloud-to-edge continuum from a research point of view. In particular, we are part of a bigger research center called Fondazione Bruno Kessler. We are located in the north part of Italy, near Venice. During the demo you will see a map, and you will see exactly where we are located.
C
...in an automatic or semi-automatic way, and you should also be able to model the workload and intelligently (smartly, let me say) place that workload. Last but not least, this is still experimental as a feature, but we introduced it in FogAtlas in the last few months: the idea is to also provide a sort of negotiation over the resources between the infrastructure owner and the developer, and a price for each resource.
C
FogAtlas can be applied in different verticals. I listed here some of these verticals; probably there are more. What I will say now is that we focused in particular on two verticals: one is Industry 4.0 and the other one is smart city, and in particular smart city is the vertical that you will see during the demo.
C
We presented FogAtlas in different venues, in particular at events here in Italy and at the Fog World Congress in California last year, and we also wrote a couple of papers on FogAtlas. On one of them, or, if I recall correctly, on both of them, the name was not FogAtlas but Foggy; that was the former name of FogAtlas.
C
You have the references here to the two papers, if you want to check them. The current TRL of FogAtlas is four, so basically we tested it in a lab environment, but we have a pilot that is ongoing with an industry here in Italy, in the context of Industry 4.0. For the future, we plan for sure to finalize and enhance some functionalities, in particular the ones related to usability and the front-end, and to experiment in order to improve, I mean, the TRL.
B
Yeah, okay, let's start with the demo, but before that, give me a couple of minutes to give you a quick introduction on the context. The context, as I said before, is the one of smart placement. Basically, with FogAtlas we are trying to optimize the usage of the physical infrastructure, but we also try to respect traditional and, more importantly, non-traditional requirements imposed by the application developer or the application provider.
B
The reference scenario is the one of smart cities, and the use cases that can be enabled are, for example, traffic control, city security, or everything that involves, for example, video streaming or video pattern recognition at the edge, but not only. The reference infrastructure that you will see is an emulated infrastructure in our lab, but it represents the real infrastructure we have here in Trento.
B
This is the picture that summarizes the infrastructure. We use this infrastructure to do experiments and demos and, as you can see, we have five regions, five different regions: each region represents one Kubernetes worker node or an aggregation of many Kubernetes workers. They can be virtual machines or hardware.
B
We classified the regions in three tiers, but FogAtlas also supports more tiers. As you can see, the tiers are used to classify the computational resources, which are higher at the top and decrease as we reach the edge. We can also note the links here: we have the links that interconnect the regions.
B
Of
course,
high
usually
have
high
bandwidth
and
low
latency,
but
instead
the
links
that
interconnect
the
edge
to
the
cloud
usually
have
low
bandwidth
and
high
latency.
Finally,
we
have
we
have
camera
that
are
directly
attached
to
the
region
that
are
that
are
at
the
edge
in
the
edge.
Here
we
will
see
the
deploy
of
a
particular
application
that
is
composed
by
three
microsoft.
Microservices.
B
Provide
a
specific,
specific
role
on
a
big
application,
so
we
have
a
chain
of
microservices.
If
you
look
at
the
picture
on
the
right,
we
have.
The
green
box,
for
example,
is
a
could
be,
could
be
a
microservice
that
extract
image
of
image
from
a
video
stream
of
a
face
of
or
of
a
plate
of
a
car
plate.
The
yellow
one
could,
for
example,
recognize
the
plate
or
the
face,
and
the
red
one
could
provide
some
analytics
or
machine
learning
or
anything
anything
that
does
not
need.
B
...that does not strictly require strict network requirements towards the endpoint, that is, the camera. We classified the applications that you will see deployed on the infrastructure in two categories. The first one is the traditional cloud-native application: for these applications we do not provide requirements on the network, but only computational requirements. Later you will also see the deployment of an IoT, data-centric application, in which we provide specific requirements.
B
Specific network requirements between the camera and the first component, the first microservice, and also between the first and the second, and the second and the third. As I said before, the reference scenario is the one of smart cities. Yeah.
B
So this is a bit about these slides. In the next one I just want to show you what you will see later in detail: the microservices. In the name of each microservice, the first part of the name, "small", represents the computational requirements, while the second part represents the network requirements.
B
Yeah, and the green one has small requirements in terms of... basically, the color of the box represents the computational requirements, while, as you will see later, the capital letter inside the box represents the networking requirements between the containers, between the microservices.
B
Okay, what will we see in the demo? You will see a first phase in which we will deploy traditional applications: three traditional applications, each one composed of three microservices. One of these microservices needs to connect to a camera (camera one, camera two and camera three) that, if you remember the picture before, are directly connected to the regions Vela, Meziano and Povo, and you will see that all the microservices will be deployed on the cloud.
B
In the second phase, we will deploy an IoT-centric application, specifying the network requirements between each component, and you will see that in this case all the microservices will be distributed on the infrastructure. You will also see that a hard constraint, like the one on latency, is satisfied in a specific and particular case.
B
...is an application composed of many microservices, and it is currently running on the cloud region, but to avoid complexity I will disable the listing of these components on the dashboard, so we will concentrate on the components that are specific to the use case. As you can see, we have icons representing the three tiers we showed before, and we can see that in these three edge regions, Vela, Meziano and Povo, we have cameras; we simulate cameras connected to these specific regions.
B
Okay, let me start the first phase. In the first phase we will deploy traditional applications, so you will see the deployment of three different traditional applications, each one composed of three different microservices: one with small requirements in terms of computing, one with medium...
B
Okay, now it's completed, okay! You see, this script here on the shell is just a script that lets us reproduce the demo quickly, but behind the scenes it is using our command line interface and communicates with the provider's API.
B
Okay. So now I want to deploy the IoT-centric application. In this case, we are submitting to the system the deployment requests, which are just a representation of the microservices chain, specifying inside also the network requirements that have to be respected between all the components. As you can see, for example, for camera...
B
...two: the microservice that has small computing requirements is deployed on this node at the edge, also because it has higher network requirements in terms of bandwidth and latency towards the camera that is connected to this region, that is, camera two. The intermediate component is deployed on Trento, which is the intermediate region.
B
If
you
remember,
we
called
it
cloudlet
before
and
the
components
that
does
computation
high,
computation
or
analytics,
etc
is
deployed
on
the
cloud.
In
this
way,
the
the
the
bandwidth
usage
is
concentrated
on
the
edge
between
this
container
and
this
one.
B
Okay,
so
also
we
can
see
on
the
grafana
dashboard.
B
Okay,
we
can
see
on
the
graph
dashboard
that
we
have
three
components:
three
microservices
deploy
on
the
trento
region
and
just
one
on
the
remote
region,
the
edge
region,
and
we
have
three
components
also
on
the
cloud
we
can
see
in
grafana
they
through
this
goat,
we
can
see
the
in
percentage
the
sorry
the
computing
required
the
computing
status
of
the
of
the
region.
B
We
can
see
also
here
an
historical,
an
historical
view
of
these
computational
resources
and,
as
you
can
see
here,
we
have
the
network
resources
and,
for
example,
in
the
link
between
cloud
and
trento.
B
Here,
at
the
end
we
can,
we
can
show
you
also
the
difference
over
the
time
of
the
resource
pricing.
Here
we
have
the
pricing
yeah.
This
is
more.
C
Or
less
the
experimental
feature
I
mentioned
before,
just
to
give
you
an
idea
see
that
when
the
applications
are
deployed
on.
B
Okay, so just to complete the demo, we will deploy an additional, final application, with a specific requirement on latency between the camera and the first component, the first microservice, and also between the first and the second. Think of an example: this application could be, I don't know, a gaming application at the edge, in which we have a first container that manages the streams that come from the camera, and a second container...
B
Yes, okay. So now we can see that we specified a high requirement in terms of latency, meaning from the component that talks to, that communicates with, the camera, to the component that does some heavy computing calculation.
B
The computational resources on the Povo region have decreased, but we should also see that the price of this region increased, and also the link, yes, yeah, the link connecting Povo with the cloud, basically this link. Okay, so we have finished.
F
So, one question. I don't know if you can open a couple of the deployments to show a little bit about what Kubernetes constructs you're using in the services and routing to get your network priority design. I understood, sort of, the narrative of it, but what does that look like in, kind of, YAML?
B
Let me show you, just a second: I will take the...
B
Yes, okay: this is a deployment request that we send to the FogAtlas API. In this deployment request we basically specify data flows, for example this data flow: in this data flow we specify the network requirements between the camera and the destination microservice, and here we specify the bandwidth, and here the latency. So the data flows are basically the edges of a graph. Okay.
B
A graph representation of a microservice chain, okay, yep. And here we have the nodes, okay, so the microservices are the nodes, and here we specify all the computing requirements of the specific microservice.
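As a rough illustration only: the field names below are hypothetical and merely mirror the structure described in the answer (microservices as graph nodes with computing requirements, data flows as graph edges with bandwidth and latency), not the actual FogAtlas schema. A deployment request of this shape might look roughly like:

```yaml
# Hypothetical sketch of a FogAtlas-style deployment request.
# Nodes: the microservices, each with its computing requirements.
microservices:
  - name: extractor          # the component that talks to the camera
    cpu: 500m
    memory: 256Mi
  - name: recognizer
    cpu: "2"
    memory: 1Gi
# Edges: the data flows of the chain, each with network requirements.
dataflows:
  - from: camera-2
    to: extractor
    bandwidth: 200Mbps       # minimum bandwidth to be guaranteed
    latency: 10ms            # maximum tolerated latency
  - from: extractor
    to: recognizer
    bandwidth: 50Mbps
    latency: 30ms
```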
F
Okay, and I see there that that's the magic line there, right, which has the entire resource, the Kubernetes resource, in...
B
Yeah, currently we are including the Kubernetes spec of a Deployment inside this object that is treated by FogAtlas, that is, the microservice, okay. And based on the network requirements and the computing requirements, what the FogAtlas orchestrator does is to add a specific...
F
And then how does the routing work, with the requirements for the network that you have described in the first graph, for the cameras? How are the services using the traits, I mean the constraints, for using certain services? Like, inside one of the microservices that's calling, so, your source to destination.
B
That's a good question. Currently, in this prototype, we only deploy an object that is a Deployment, okay. All the Services enabling the interconnection between containers have to be deployed beforehand, or have to be included inside this deployment descriptor.
F
...running on different nodes? Like, if you had the middle tier, the one that started with a T, I can't remember the name, but that middle tier: if there were several choices, and a camera had multiple possible service pods to reach...
F
I
think
so
so
that
the
alternatives,
the
way
the
the
graph
kind
of
resolves
with
those
alternatives,
is
you're
you're,
assuming
you
know
in
advance
the
performance
of
the
links
exactly
and
you
code
those
into
the
graph
as
opposed
to
something
that
is
more
runtime
or
dynamic,
where
it's
testing
multiple
edges
in
the
graph
to
see
which
one
is
the
the
best.
At
that
time.
Okay-
and
I
know
I
know
again
like
I've,
said
in
in
other
presentations-
this
is
a
early
stage.
E
This is Harold. So I understood that you have sort of a FogAtlas scheduler that takes a graph description representing the topology of your application, and then it adds sort of node selectors to the YAML file that you then give to the Kubernetes scheduler to schedule all the containers. Is that correct?
B
That's correct, okay. Basically, each region can be composed of one Kubernetes worker, but in a region we could also have many Kubernetes workers that are interconnected in the same layer 3 network, okay. So basically what FogAtlas does is to schedule and to place the container inside a region; then it leaves to the Kubernetes scheduler the decision about the specific node where these microservices should be deployed.
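A minimal sketch of this idea, assuming a hypothetical region label (the label key and image name below are invented for illustration): the orchestrator pins the pod to the chosen region via the standard Kubernetes `nodeSelector` field, and the Kubernetes scheduler is then free to pick any node within that region:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: extractor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: extractor
  template:
    metadata:
      labels:
        app: extractor
    spec:
      nodeSelector:
        fogatlas.fbk.eu/region: povo   # hypothetical label: restrict scheduling to this region
      containers:
      - name: extractor
        image: example/extractor:latest   # placeholder image
        resources:
          requests:
            cpu: 500m
            memory: 256Mi
```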
C
Let me also add that in this way, I mean, as it is implemented now, FogAtlas is more or less a sort of wrapper of Kubernetes. That means that we wrap the Kubernetes scheduler and the Kubernetes API with, I mean, something that adds this distributed functionality, in terms of the ability to handle network requirements.
E
So, in other words, with the deployment file that you pass on to the Kubernetes scheduler, you restrict the degrees of freedom of the scheduling decisions, so that the networking requirements are always fulfilled.
B
Yeah, that's the point, yes. Okay, yes: we usually use a sort of template of the Kubernetes descriptor file, and then we modify it in order to pass something that will work in terms of network requirements when the Kubernetes scheduler does its internal calculation.
D
Okay, hi, do you hear me? Can I ask a quick question, maybe? (Sure, yeah.) So my name is Laura. I'm working for Sierra Wireless, a company which is making... I mean, my team is working on creating IoT gateways. And so, on your description, on where the devices were put on the map...
D
...I was wondering what kind of networking links you were using between those tier-two nodes and the other ones. Are they like regular Ethernet, or more like fiber, or...?
C
The infrastructure is emulated in a lab, so think about a set of virtual machines that are, I mean, deployed on a data center, okay, and so on and so forth. So for this demo in particular we didn't think about the real network links between the different tiers, of course.
D
So maybe you will focus more on that. And so, yeah, just... do you plan to have...
...but if you do some kind of features, like doing some image processing, or, I don't know, maybe other things using the camera, and those services are deployed using Kubernetes: if you happen to have small network issues, you might, I mean, that's my experience, see that those services get removed from the node. So do you plan to, I mean...
D
Does this make sense, and is that something you plan to address? Like running some of the services as built-ins outside of Kubernetes, or making the problem disappear with a custom solution, or, I mean...
C
Yeah, a very, very good question indeed, I mean. Let me try to answer; I'm not sure I will be able to completely answer, but of course you touch a very interesting point, because we are in a distributed environment and the connectivity cannot always be guaranteed.
C
So
at
the
moment,
in
this
architecture
we
have
the
kubernetes
master
that
is
running
on
icloud.
So,
of
course,
if
something
happened
on
something
happens
on
the
link
between
the
cloud
and
the
edge,
the
kubernetes
master
is
no
more
able
to
to
talk
with
with
the
community,
the
kubernetes
workers
and
the
other
services.
C
In
some,
let
me
say
smart
way,
for
example,
in
order
to
keep
a
cache
of
the
video
streaming
that
is
collecting
from
the
camera,
then
probably
everything
is
working
because
I
don't
know
10
minutes
later,
when
the
network
leaner
comes
back
on
the
remote
part
of
the
application,
the
microservice
that
is
running
on
the
edge
can
again
connect
with
the
microscope
that
is
running
in
cloud,
and
so
they
can
start
exchanging.
The
water
was
in
cash.
C
Salem
education
is
not
true
for
the
kubernetes
services,
and
for
this
reason
what
we
are
thinking
about
we
didn't
implement.
Yet
this
is
to
distribute
all
the
kubernetes
so
to
have.
Basically
a
sort
of
I
mean
refrigerated
bernat
is
where
we
have
a
yes,
a
master
in
the
cloud,
but
probably
another
master
on
the
middle
tier.
D
Yeah, I mean, that's, yeah, that's an interesting way to divide the problem, or, I mean, I don't know how to say that properly, but to reduce the constraints and, yeah, ensure that a couple of things keep running correctly where you need them. So, okay, thanks for the...