From YouTube: Kubernetes WG IoT Edge 20190327
Description
March 27 2019 meeting of the Kubernetes IoT Edge Working Group - presentation on AMQP
B: Can you see that, the red screen? Can you hear me? Yep? It's all good, excellent, fantastic. Let me apologize first, because if you hear any snoring in the background, that's actually my dog. I've been through this presentation many times and have obviously bored her to death at this point. Hopefully that won't be a common reaction, but there you have it. I am a developer on the Apache Qpid project; that's a project that deals with open-source implementations of AMQP-based messaging components.
B: I'm also a member of the OpenStack Foundation, a core committer to their messaging library. Also messaging. I've been in the telecommunications field for more years than I'd like to admit, but my current interests are really large-scale, parallel, distributed communication systems. So today I'm going to talk about the AMQP protocol, which is where I've been doing a lot of my current development.
B: This presentation is broken up into two sections. The first part will talk about what the protocol is and what it provides to developers who want to implement messaging systems. The second part will be more of a stepping back and looking at how AMQP is deployed in massively distributed cloud scenarios, and what components and infrastructure tools are out there to do that. So this conversation will kind of start with small bits and then build up.
B: So if at any point I say something that's unclear, please stop me. I don't want to lose anybody, so, you know, ask questions, feel free. I think there's actually a link to these slides someone will make available as we go forward. So with that said, let's start with a quick background.
B: Excuse me. AMQP stands for Advanced Message Queuing Protocol, and the word "queuing" in there is a misnomer, I guess, because when it was originally designed it was designed as a client-server protocol, the server being the broker, and two communicating clients had to communicate through a broker, through a queue. That's no longer the case; that was back in the stupid ages.
B: When this thing first started, it seemed like a good idea to have all your messages go through a single point. That doesn't scale up well, as you can imagine. So queuing is possible, but it's not built into the protocol; we'll talk about that some more. It was started way back in 2003 by a consortium of banking-industry companies led by JPMorgan Chase. Their purpose was to create an open standard for middleware messaging that could be shared in the banking industry, and that kind of started the ball rolling.
B: Then, not only were they working on the standard, this paper, but they were also trying to prove the pudding, if you will. They created the Apache Qpid project (the project I'm currently a committer on) for actually implementing the protocol: client libraries, server libraries, a lot of brokers, things like that, to prove it out. So there were many iterations of improvements and feedback over the years in developing this protocol.
B: Eventually this group of interested parties went far beyond just the EU banking industry; it expanded to something like 20 contributing entities at the time. So it was disbanded as this kind of informal thing and re-established as an OASIS working group. Companies such as Red Hat, Microsoft, VMware and others worked on producing the standard. It was eventually published in 2014 as version 1.0, and there are links here to the actual standard.
B: It's an open standard; it's not encumbered. You can click on that link and actually develop against it yourself, learn about how the protocol is defined, and there's a working group (the second link) that continues to add value and refine the various parts. So, as a developer, as somebody who wants to develop applications that send messages, what does it bring to the table?
B: Well, first of all, it's a middleware, application-level protocol that is not meant to be implemented as part of an operating system or by a particular vendor; it's OS-independent. It's all at the application level; it's layer 7. The only thing it really depends on is a reliable network transport, and that's typically, you know, TCP, but aside from that it's totally user space. And, of course, it's an enterprise messaging system.
B: That kind of fan-out pattern is supported and, of course, so are brokers that store and forward messages. And since it's an enterprise protocol, it defines very strict flow control, so you have efficient use of your resources and don't overrun consumers; we'll talk about that in a minute. And, as I mentioned before, it's a peer-to-peer protocol. It's symmetric: there's no client-server distinction; that's defined by the application. The protocol itself is just two peers talking to each other.
B: The security domain starts at the connection level, at the TCP level, and it involves layers of security. The encryption layer, typically defined as TLS/SSL, is laid on top of an authentication layer that's SASL-based, so it allows different types of authentication mechanisms, like Kerberos and things like that, to plug in, with the encryption underneath or above. So: security at the connection level. The other thing is, when you have multiple different, disparate languages and clients, all with their own representations (you have Ruby, you have all these things) trying to communicate...
B: You want a way of accurately describing the data you're transferring, and that's the type system defined by the protocol. It defines basic types like integers (different sizes of integers), strings, floats and timestamps as binary types; nothing's converted to ASCII and sent as ASCII, it's all, you know, compact and binary. It also defines higher-level container types like maps and lists, so you can build these very deep, structured datagrams.
B: The interesting thing here is that these lists and maps are not type-strict. In other words, you can have a list that has a string as its first element, an integer as its second, a map as its third; they're polymorphic. And it's also a self-describing codec, which means that you can start parsing the binary body and determine, you know, what its structure is.
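The polymorphic, self-describing encoding described here can be sketched as a toy tagged codec. This illustrates only the idea, with made-up type tags; it is not the actual AMQP 1.0 wire format or its type codes:

```python
import struct

# Toy self-describing codec: each value is prefixed with a one-byte type
# tag, so a decoder can walk the binary body without a schema. (The real
# AMQP 1.0 type system uses different codes and many more types.)
TAG_INT, TAG_STR, TAG_LIST = 0x01, 0x02, 0x03

def encode(value):
    if isinstance(value, int):
        return bytes([TAG_INT]) + struct.pack(">q", value)
    if isinstance(value, str):
        data = value.encode("utf-8")
        return bytes([TAG_STR]) + struct.pack(">I", len(data)) + data
    if isinstance(value, list):
        body = b"".join(encode(item) for item in value)
        return bytes([TAG_LIST]) + struct.pack(">I", len(body)) + body
    raise TypeError(f"unsupported type: {type(value)!r}")

def decode(buf, offset=0):
    tag = buf[offset]
    if tag == TAG_INT:
        (n,) = struct.unpack_from(">q", buf, offset + 1)
        return n, offset + 9
    if tag == TAG_STR:
        (size,) = struct.unpack_from(">I", buf, offset + 1)
        start = offset + 5
        return buf[start:start + size].decode("utf-8"), start + size
    if tag == TAG_LIST:
        (size,) = struct.unpack_from(">I", buf, offset + 1)
        items, pos, end = [], offset + 5, offset + 5 + size
        while pos < end:
            item, pos = decode(buf, pos)
            items.append(item)
        return items, pos
    raise ValueError(f"unknown tag: {tag:#x}")

# A list may freely mix types: a string, an integer, a nested list.
mixed = ["temperature", 21, ["nested", 42]]
decoded, _ = decode(encode(mixed))
assert decoded == mixed
```

Because every value carries its own tag, a receiver in any language can reconstruct the structure without prior agreement on a schema, which is the property being described.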
B: Cool, cool. Okay, so, on to the features of the protocol. Obviously, when you have an application that sends a message, it usually wants to know whether the message got there, or if it was rejected or failed, and that's also built into the AMQP protocol. Since it's a peer-to-peer protocol, it essentially has acknowledgements that travel back: the sender sends to the receiver, and the acknowledgements are proxied back.
B: The other point that's key is that the addressing is layer 7: addressing is controlled, determined, by the application itself. It's not network addresses. You use IP addresses, you use domains, hosts and ports for setting up your TCP connections, but the messages themselves are addressed at the application level: when you send a message, you publish it to an address, and when you want to subscribe to a certain type of message, you subscribe to an address. That address has nothing to do with the network address.
B: It's simply an identifier used by the application to identify the source and consumers of a service; it's like a name for a service. Think of the name of a queue, for example; that's a good example of that. The protocol defines addresses simply as strings, so they're very flexible in how they can be used.
B: The other thing is, it's got to make efficient use of network resources. Nobody wants to create a socket per communication flow, not if you can share your connections and multiplex multiple communication channels within them. And you also don't want to have blocking, necessarily: you want to be able to send your messages in batches, deal with the acknowledgements at a later time, and not necessarily block at the protocol level. So it's totally asynchronous.
C: Hey, can I ask a question, please? This is Cindy. So you mentioned AMQP supports async pub/sub messaging. So then, how do we ensure the acknowledgement? You know, for MQTT there's QoS, like different levels, to ensure the right delivery between the server and client. What's the mechanism for AMQP?
B: So, I'm going to get more into the actual deployment going forward. AMQP only defines a point-to-point mechanism for acknowledgement flow. However, the intermediaries are allowed to provide their own, if you will, fan-in summary of acknowledgements, and there's a feature in the router that actually does that. For the different classes of acknowledgements that come back, it has a way of kind of summarizing them for you.
B: If there are hard errors, it will forward that along, indicating that something failed; or, if anybody accepts it, it forwards that along, so you know the message was delivered. But it doesn't do any counting or things like that; it's not that deterministic.
C: So does it have a queue, like, on the server side?
B: Oh, and the last thing I wanted to point out here is that it defines per-message priority, which is useful if you've got a lot of traffic that is, say, low priority, like metrics and monitoring that's issued, and you have an alert, something high priority, that needs to be processed. Per-message priority is a signal to the message bus to prioritize the delivery of high-priority over low-priority messages.
B: Okay, so when we show examples of deployments we'll talk about that a bit more. So, I had a hard time naming this slide. What I wanted to do is show you the various components that AMQP defines, the model and the objects, but, like I said, I'm kind of focusing on the way the protocol operates at the wire level in this picture. The topmost object is a connection; this corresponds to your TCP transport.
B: The connection is then further subdivided into sessions, one or more sessions, and a session is kind of a resource-and-control sandbox. Multiple sessions are unrelated, in that they can carve out their own bandwidth and reserve it; we'll talk about that a little bit going forward. Then, within a session, there are one or more links.
B: This is where messages come in: the concept is that when messages are transferred, they're transferred over links, so as an application programmer you're going to have a lot of exposure to dealing with link operations. The links are unidirectional, but that is only in terms of the direction of travel of the message. Control information like acknowledgements travels back, but the message itself is unidirectional, which means that on one end of the link you have a role that is a sender: it produces messages to the link.
B: On the other end of the link you have a role that's a receiver: it consumes messages. The terms used in the protocol are a "source" of messages and a "target" that consumes messages, and you might see that going through the documents a bit. I'm going to talk in a little more detail about how this whole sessions-and-links thing works going forward. And the last thing, obviously, is that it defines a message format, which I'll show you in a little more detail later.
B: As I explained before, you have a TCP connection over which one or more links are created; in this case I'm showing you a single link. In the topmost picture you have two applications, two processes, that need to communicate. The first step is to establish the TCP connection and do the authentication, authorization and encryption at that level. Then a link is established through a negotiation process between a source and a target, and that link is then used to send messages from the source to the target, with acknowledgements traveling back.
B: The target can't send messages back to the source on that link. To do so, it would actually have to create its own source and its own target. So, to do duplex communication, as the second picture denotes, you actually have to have two links. In the case of, say, an RPC client, it's going to send requests from its source to the target server, and the server is going to...
B: ...you know, process that, and it's going to turn around a response, and the message actually carries the address of where to send the response to, which happens to be the RPC client on the left side. Links are actually very lightweight; they're really just an identifier in a message frame, so having multiple links does not introduce a lot of overhead. So that's how you do full duplex: a pair of single-direction message transfers.
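The request/response pattern above (two unidirectional links, with the reply address carried inside the request message) can be sketched with a toy in-memory bus. The `Bus` and `Message` names here are illustrative stand-ins, not AMQP or Qpid APIs:

```python
from dataclasses import dataclass

@dataclass
class Message:
    to: str                # application-level address, not a network address
    body: object
    reply_to: str = None   # where the responder should send its reply

class Bus:
    """Toy address-based bus: delivers each message to the consumer
    subscribed to its 'to' address (stand-in for the AMQP network)."""
    def __init__(self):
        self.consumers = {}
    def subscribe(self, address, callback):
        self.consumers[address] = callback
    def send(self, message):
        self.consumers[message.to](message)

bus = Bus()
replies = []

# Server: consumes requests on "rpc/service" and replies over a *separate*
# link to the address named in reply_to. Messages flow one way per link,
# so the response needs its own sender back to the client's address.
bus.subscribe("rpc/service",
              lambda m: bus.send(Message(to=m.reply_to, body=m.body * 2)))

# Client: consumes replies on its own private address.
bus.subscribe("rpc/client-1", lambda m: replies.append(m.body))

bus.send(Message(to="rpc/service", body=21, reply_to="rpc/client-1"))
assert replies == [42]
```

The key point mirrored here is that the reply path is just another address carried in the message, not a property of the transport.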
B: Now, taking a step back: I'm taking that picture from the previous slides and kind of peering down the connection. In this structure you see the connection, in this particular example, is carved up into two sessions.
B: One session is denoted as the bulk session; that's where links that send bulk data are contained. There's also a control session that contains links for high-priority, occasional traffic. What you don't want is for the bulk session to consume all your bandwidth; you want to have some reserved for the control session. That's what sessions do: sessions provide a way of preventing one set of clients from overwhelming another, and this all goes to flow control in AMQP.
B: You can't have more than an outstanding number of bytes across all messages, across all links. It starts at a default, like four gig, but once you set your window through configuration you can negotiate that and obviously set it down, so that protects the memory space of your application.
B: It's all very important. People don't tend to think about flow control, but without it you will not be able to scale. You generate these large backlogs if something should go wrong; that increases latency; that means things like RPC timeouts; they'll try to retransmit; things get worse. So you really have to have a well-designed, well-thought-out flow control protocol, and that's provided on two levels: bytes, for memory footprint, and per-link message flow control, for preventing senders from overrunning consumers. And the link credit control is very simple.
B: It's just credit-based. Over the link, the consumer will give credit to the source. Credit is essentially the number of messages it's allowed to send at any time. So in this case the target has communicated back through the link: you can send me three messages. As the application sends three messages, it decrements its credit count, and once that goes to zero, it's blocked.
B: This gives the application time to process those messages. Once it gets to a point where it has processed enough of those messages and freed up resources, it can then send credits back to the source. Now, as an application programmer, you're going to see a lot of this; you're going to deal a lot with credits. The session stuff is more set-it-and-forget-it, provision-it-and-forget-it, but credits are more dynamic.
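The credit accounting walked through above can be simulated in a few lines. This is a plain-Python sketch of the mechanism, not the Qpid Proton API:

```python
class Link:
    """Toy credit-based link: the receiver grants credit, the sender
    decrements it per message and is blocked at zero."""
    def __init__(self):
        self.credit = 0
        self.delivered = []

    def grant(self, n):           # receiver side: replenish credit
        self.credit += n

    def send(self, message):      # sender side
        if self.credit == 0:
            return False          # blocked until more credit arrives
        self.credit -= 1
        self.delivered.append(message)
        return True

link = Link()
link.grant(3)                     # target: "you can send me three messages"
sent = [link.send(f"msg-{i}") for i in range(5)]
assert sent == [True, True, True, False, False]  # blocked once credit hits 0

link.grant(2)                     # receiver processed some, frees resources
assert link.send("msg-3") is True
```

Note the sender never needs to know why credit is low; the receiver's own resource situation is fully encapsulated in the numbers it grants.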
B: The last part of the protocol I want to talk about is the message body itself. The message payload is divided up into three sections that are immutable: the message bus can't change these sections. They're created by the sender, and they're received exactly the same way by the receiver.
B: The three sections are: first, a properties section, which is required; that's delivery information, with things like the "to" address, where this message is going. It has an identifier to help with identifying message duplicates and retransmissions.
B: It's got a number of things in there, including a reply-to field for use with RPC (remote procedure calls) to tell the server where to send the reply. Then there's an application-properties section; that's metadata that can be used alongside your data, things like encryption signatures or timestamps. It's optional, but it's there to allow you to have that separate metadata. And then there's the data, the body itself, which is any sort of AMQP-defined type, as I explained before.
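As a rough sketch, the three sections can be modeled with a plain dataclass. The field names follow the AMQP property names informally; this is an illustration, not the actual client-library message type:

```python
from dataclasses import dataclass, field

@dataclass
class AmqpMessage:
    # Properties section (required): delivery information, immutable in
    # transit; the message bus may not modify it.
    to: str
    message_id: str = None     # helps detect duplicates / retransmissions
    reply_to: str = None       # RPC: where the server sends the reply
    # Application-properties section (optional): your own metadata, e.g.
    # signatures or timestamps, kept separate from the body.
    application_properties: dict = field(default_factory=dict)
    # Body: any AMQP-typed value; scalar, map, list, arbitrarily nested.
    body: object = None

request = AmqpMessage(
    to="sensors/ingest",
    message_id="sensor-7-0001",
    reply_to="sensors/ack/sensor-7",
    application_properties={"timestamp": 1553700000, "signature": "abc123"},
    body={"temperature": 21.5, "unit": "C"},
)
assert request.application_properties["signature"] == "abc123"
```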
B: Okay, so that's a quick run through most of the features of the protocol. Now I'd like to kind of shift to, you know: what does this mean? How does this work? What's the big picture for building an especially large distributed message bus?
B: Okay, so how do you deploy AMQP, especially when you have something like this, where you have multiple sites distributed across a geographic domain? We talked about point-to-point links; we talked about TCP connections between consumers and producers. You can do that, but it doesn't scale well, and it's really a pain if you think about it, because you have different IP addresses, you have firewalls you have to punch through, maybe VPNs; it quickly gets complicated.
B: There's no need to do that with the AMQP protocol. There's actually no need to do it, because around the AMQP protocol we've built tools and facilities to build up infrastructures that avoid that kind of mess. So I want to take a moment and talk about open source projects that build infrastructure components to help you build AMQP message networks. And of course I want to start with the ActiveMQ Artemis broker: there's this broker that communicates AMQP 1.0.
B: So you can have clients that communicate with this broker, with broker services. It's a multi-protocol broker: it not only speaks AMQP 1.0, it speaks other protocols like AMQT (I'm sorry, MQTT), so that's a classic; people understand what that is. But the other thing that's very interesting is the Qpid Dispatch Router. This is kind of a new thing in terms of messaging. People tend to think of message buses as being brokers; this is something else.
B: Think of an IP router: you send internet protocol messages through a TCP connection, with an address and destination, and you let the routers figure out how to get that to the eventual consumer.
B: This does the same thing, but at the application level, at the message-address level, and it runs a routing protocol. Instead of using IP addresses, it's using message addresses to determine the shortest path between producers and consumers. It also has those delivery patterns that we talked about before, like multicast; it implements multicast in the router, and I'm going to show you examples of how this is done. And then the last thing: Apache Qpid has a project called Proton, which provides client libraries.
B: You don't have to implement the AMQP protocol yourself. There are client libraries provided in many different languages that are interoperable. So, if you're creating applications, you can just pull this open source library for your particular language, right off the shelf, and integrate it.
B: So I'm going to start talking a little bit about these infrastructure components and how you build on them. A quick review of the different messaging patterns we've talked about: we talked about a single producer and consumer with a direct connection. Send a message, receive the acknowledgement back; that's straightforward!
B: When you have a broker: a broker really isn't a transport mechanism, it's a service, a storage service. In that capacity it actually breaks the communication path between producer and consumer; it's actually a man in the middle. It acts both as a producer and a consumer. When a producer sends a message to the broker, the broker acknowledges receiving it. That's all the producer understands; that's all the notification the producer gets. It does not know...
B: ...whether that message is ever consumed. It may not care; an event system may not care, and that's a good thing, a good use of a broker. However, if you want to try to do something synchronous, where the producer and consumer are expected to be online at the same time, like an RPC call, a broker is just overhead. It doesn't really lend anything to the efficiency of that kind of pattern.
B: So now, having that in mind, I'm going to introduce the whole point of this: the Dispatch Router, this message router. The message router serves as the transport between producer and consumer, but it's transparent as far as the protocol is concerned. That is, the producer sends a message over its TCP link, which is actually terminated at the router.
B: They can actually be completely disparate TCP connections, and that allows us to build this kind of messaging network, where each site has these little edge routers. All the applications need to know, as far as TCP is concerned, is how to connect to these edge routers. The edge routers are then connected to main core routers. You could have any configuration you want for these.
B: Okay, now, you see I also have brokers here. They're not part of the transport; they're a service, so they kind of hang off these routers, providing store-and-forward where needed. So how does this work? What does it look like? I'm just going to show you a smaller example here: we have two edge sites and we have a cloud site.
B: At that point they start exchanging address information and determining where things live. Now, on the left-hand edge we have a service A that's a producer; it wants to produce to addresses B and C. In the middle we have a service on the cloud that is a consumer for B; that's where all B messages go. On the right-hand side we have a consumer for service C; that's where all C messages will go. Well, service A doesn't really care: it connects to its local edge and it says, hey, send these messages.
B: One goes to B, one goes to C, and it's totally transparent. It's up to the routers to handle handing off the B message to B and doing direct communications, if you will, through the cloud to C. So this hides the complexity of the transport from the application, and it also provides inter-cloud connectivity, as in this example. So, going forward, let's take a look at a message flow.
B: The producer produces the message to its edge router, and then the router, by using a shortest-path algorithm, finds the best path between producer and consumer. Okay, and that algorithm is dynamic. If you reboot a router, it doesn't remember any of it. It just knows what TCP connections it needs to establish, and once they're established, it then begins communicating all the address-routing information with its peers, and it learns it dynamically. No human intervention, so you can reboot one of these things and it comes back online.
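The shortest-path computation can be sketched with a standard Dijkstra pass over a router graph. This illustrates the idea only; the Dispatch Router's actual routing protocol and cost metrics differ in detail:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a router graph given as {node: {neighbor: cost}}.
    Returns the list of hops from src to dst, or None if unreachable."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route, e.g. a partitioned network

routers = {
    "edge-1": {"core-a": 1},
    "core-a": {"edge-1": 1, "core-b": 1, "core-c": 5},
    "core-b": {"core-a": 1, "core-c": 1},
    "core-c": {"core-a": 5, "core-b": 1, "edge-2": 1},
    "edge-2": {"core-c": 1},
}
assert shortest_path(routers, "edge-1", "edge-2") == \
    ["edge-1", "core-a", "core-b", "core-c", "edge-2"]

# A failed link is simply dropped from the tables; the next computation
# routes around it, with no human intervention.
del routers["core-b"]["core-c"]
del routers["core-c"]["core-b"]
assert shortest_path(routers, "edge-1", "edge-2") == \
    ["edge-1", "core-a", "core-c", "edge-2"]
```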
B: So there's no quorum, there are no masters, there's no synchronization of that sort going on. They're fairly independent, pretty dumb. The amazing thing here is that, since you don't have a single point of communications, you can actually have multiple parallel flows. These are actual parallel flows; they don't consume shared bandwidth, because they're traveling over different TCP connections.
B: The router is intelligent enough to understand what brokers do; it has some built-in smarts there. So when you configure the router to connect to a broker, it understands that the addresses on that broker, which are the queue addresses, the queue names if you will, are queues, and any producer that wants to send to that address will send to the queue. The queue is the owner of that address; it's the waypoint, the designated endpoint, for producers, and for consumers it's just the opposite.
B: Now we have all the patterns. We can have a single producer and multiple consumers throughout a very large network, producing a single message and fanning that out. The replication is done by the routers. So at each branch point in the tree graph (say here, at the second or third hop, we have three branches) three copies of the message are made and sent out, and then further along to the right we see two more copies made. So the distribution load is spread where necessary.
B: It waits for the acknowledgements from all three entities before forwarding a final summary acknowledgement back. For the summary acknowledgement, there's no defined priority for them; basically, what the router does is look for any hard errors, any rejection because the message was bad, and it says: oh look, somebody received this and rejected it; you've got to know about that. Even if the other ones accepted it, the rejection is more important.
B: Otherwise, if they come back with a few accepting it, and maybe a few of them saying they can't deal with it right now, the delivery has to be summarized somehow, so it's reported as accepted. It doesn't tell you (because there's no counting) how many consumed it, but it tells you whether or not something was delivered. And it also provides back-pressure: the producer is gated by the rate of all those consumers, so it doesn't just keep hammering out fanned-out messages.
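The summarizing rule described here (any hard rejection wins; otherwise one acceptance is enough to report delivery) can be written down directly. This is a sketch of the behavior as described in the talk, not Dispatch Router code:

```python
def summarize(dispositions):
    """Fold per-consumer outcomes from a multicast fan-out into the single
    disposition reported back to the producer. A rejection (the message
    itself was bad) outranks everything; otherwise one acceptance is
    enough to report delivery; otherwise nobody took it."""
    if "rejected" in dispositions:
        return "rejected"
    if "accepted" in dispositions:
        return "accepted"   # no counting: we don't say *how many* accepted
    return "released"

assert summarize(["accepted", "accepted", "accepted"]) == "accepted"
assert summarize(["accepted", "rejected", "accepted"]) == "rejected"
assert summarize(["accepted", "released"]) == "accepted"
assert summarize(["released", "released"]) == "released"
```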
B: So that's interesting, but what's even more interesting is fan-in. If you think about edge computing, this is really what you want to deal with. You have a core data center that connects to a couple of regional sites, which are then further connected to edge sites where the producers are located. Now you have a large number of edge applications generating tons of real-time data here, say machine vision or something like that.
B: You want to locally summarize and process that, because latency is important, for example. So the messages are sent to the edge, and the edge can route locally; it doesn't have to send anything to the core. It knows that there's an intermediary processing step right on the edge site itself, where all those messages go for processing and summarization.
B: Lastly, high availability; it's always important. You've got a system here where links are going down, connections are going, and routers are rebooting. I already talked about the router: when it reboots, you don't have to intervene. It'll bring up the connections that it has from its internal configuration and then learn what's going on dynamically. You don't have to tell it "all these routes are here, these are there"; it will learn from the network, from its peers. So keeping things simple and stupid is good.
B: However, what happens if a connection fails? Just like an IP router, you route around it. So let's say this producer and consumer have this optimal path through the network for messages. Should that connection drop, the routers will detect it, they will reconfigure themselves, and they will reroute around it. No human intervention; messages might get lost, right.
B: That's really all I have. The point of this slide is to kind of hammer home the idea that this universal open protocol has a lot of universal open parts with it that are shared and easily leveraged, and there are a number of products out there that have support, in one way or another, for AMQP messaging.
B: You have plugins or integrations, and this is real stuff. This is being used today; it's being deployed in the field.
D: Yeah, I have one. Oh, by the way, let me introduce myself: I'm Frederick, the new program manager for edge and IoT at the Eclipse Foundation. Nice to meet you. It was certainly an interesting presentation, so I was wondering; I was really not familiar with AMQP before this presentation.
D: It seems, as a protocol, to be a bit more generic than MQTT, in the sense that it supports a wider assortment of topologies or use cases. But, apart from that, how would you characterize the difference between the two?
B: Well, I think what we look at when we look at AMQP is something that is essentially a full-featured enterprise messaging system.
B: Having said that, a lot of these features don't have to be used. For example, acknowledgements: if you're producing messages that are idempotent (say you're producing the current temperature, and each message that goes out supersedes the last one), you may want to send those as best-effort and not require an acknowledgement back, so that makes things a little simpler. And there are a number of things like this. Like flow control: you can just, you know, hard-code a credit number of 10 and just keep updating it as you go.
B: I don't know all that much about MQTT. I know it is a very restricted profile, designed specifically for resource-constrained environments. If you are careful, I think you can achieve a lot of that with AMQP. I'm not aware of an embedded implementation of a client library; that would be a nice thing. I know there's a C library, but I think that relies on things like a POSIX kind of environment. But, you know, with flow control and limiting message sizes... I didn't really get into that.
B: There's negotiation between sender and receiver (link negotiation, connection negotiation) that says, you know, I only support frame sizes of 512 bytes, or whatever you want to configure, and that's honored. So you can use small data packets, and the encoding is very compact, so you can do that too. So I think there are ways of deploying this that will really minimize its footprint, and yet it has its larger enterprise features that can be used. All right.
A: And I think one more thing that distinguishes AMQP from other messaging protocols is this symmetry that Andy talked about. It's not a client-broker protocol like MQTT, or even earlier versions of the AMQP protocol. So you can build different communication patterns, like direct RPC between two endpoints, but you can also involve routing; you can involve brokering, in a similar way to MQTT. So it adds complexity, but it adds a lot of flexibility in how services can communicate.
C: I'm very curious about the routing part. Oh, okay; you mentioned that, without a VPN, the sites can communicate through the router. So I'm curious: can you share some details, like, how can that happen, especially if the edge and all the sites are each behind their own firewall settings? Okay.
B: Fairly straightforward, in that the IP addresses that are used are just for a set of TCP connections. They would have to be punched through a firewall.
B: As long as the routers on the edge have TCP connectivity to the service in the center, the IP domains, the subnets if you will, can overlap. You could have a 10.x network over on the left edge and the same 10.x network address over on the right edge.
B: Yes, they can, actually. They're running, like, a shortest-path algorithm for routing. They do communicate with each other, using their own dedicated links and channels that are dynamically established once the connections are up, with internal addresses (remember, application addresses). So it's really running AMQP to communicate routing information.
B: Yeah, actually, there's a good point to be made there. In a routing network like this, these routers all know about each other through the routing protocol: they know their neighbors and they know what all the routers out there are. And the interesting thing is that, if you have a management console (the Qpid Dispatch Router has a graphical, web-based management console), and you have authentication, you can connect to any of these routers and actually be able to manage all of them, because the management protocol of the routers is AMQP.
B
Oh yes, yeah, it's all at the Apache site. I have some links at the end of this presentation for follow-up, if you're interested. Yes, there is an open-source implementation; the router is Apache 2 licensed, and it's packaged for Debian, it's packaged for, you know, the Red Hat RHEL stuff, and there's a source tarball as well, so you can get it that way.
E
Does that answer the question? Yeah. So basically, does this router expose certain API endpoints? Are they API-based or totally binary-based?
B
Well, the endpoints are the libraries for your application. Your application would use one of these libraries, perhaps from the Proton project. Say you're writing a Ruby or Go application: there's a Go library for communicating over an AMQP network. It has an API for creating a connection; it has an API for creating a receiver for an address and a sender to an address; and it gives you the ability to send messages, and callbacks for receiving messages. So if you were to build an application, you would start there.

You would pull that API in and use it. For example, when you create a connection, it would take an IP or domain name and a port, and it would set the connection up on your behalf, and then you have a connection object, depending on the language of course. You can use that connection object to create receivers and senders off of it, and then send and receive through those receiver and sender objects.
E
Oh yeah, of course. So once I go deep down on that and understand the different components, I'll definitely have better clarity.
B
Yeah, if you go to the Qpid project, that link is on the screen; there are a couple of places to look in the Qpid project.
E
Yeah, after visiting the site I see a lot of the links, and they are pretty descriptive. Thanks for bringing this up.
F
Hey, one more question. Great talk, by the way; thank you for that, it really cleared some stuff up. I have a question regarding pod communication: if I run, for example, two services in the same pod, basically on the same machine, how does that work?
B
Well, somewhat. I know the router has an HTTP service, so there's been work, although it's not really quite finalized yet, on HTTP mapping through the router into AMQP and back and forth. But typically, when you configure the router, you configure a listening port for applications to connect in, and that port typically just speaks AMQP, but it can also be designated to speak either AMQP or HTTP.
B
You bring up a very interesting point. Something I totally glossed over is that the router has all sorts of different delivery patterns based on addresses. So in your case, where you may have multiple services overlapping on a pod, the router can load-balance between those based on the address, if you tell it to. If you say, I have a certain address that's a work queue, and I want to have several servers of it out there, it can load-balance across them.
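In the Qpid Dispatch Router, that kind of work-queue behavior is configured per address prefix in the router configuration. A sketch, with a made-up prefix name:

```
# Treat addresses beginning with "work" as an anycast work queue:
# each message is delivered to exactly one of the attached
# consumers, spread across them.
address {
    prefix: work
    distribution: balanced    # alternatives: closest, multicast
}
```

The `closest` distribution is what gives the locality behavior described below, and `multicast` fans a message out to every consumer.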
B
It
also
has
cost
mechanisms
where,
if
one
service
gets
backed
up
due
to
credit
right
exhaust
certificate
exhaustion
credit,
it
will
route
the
request
to
a
parallel
server,
different
server.
That
has
less
work
on
it
and
I'm
just
kind
of
scratching
the
surface
here,
but
there
are
different
patterns
to
allow
load,
balancing
and
and
and
and
locality
right.
You
could
have
a
server,
that's
locally,
a
service,
that's
local!
On
your
pod.
You
can
have
a
service
that's
reached
through
the
network
on
a
total
different
cloud
that
are
the
same
services.
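The "credit" here is AMQP's per-link flow control: each receiver grants the sender credit for how many messages it can absorb, and a consumer that stops granting credit is effectively signalling back-pressure. Purely as an illustration of the idea (this is not the router's actual algorithm), a dispatcher could skip exhausted consumers like this:

```python
def pick_consumer(consumers):
    """Pick the attached consumer with the most outstanding credit,
    i.e. the one that has signalled the most remaining capacity.
    `consumers` maps a consumer name to its remaining credit count."""
    ready = {name: credit for name, credit in consumers.items() if credit > 0}
    if not ready:
        return None  # everyone is backed up; hold the delivery
    return max(ready, key=ready.get)

# A worker whose credit is exhausted is passed over in favor of
# a parallel worker with capacity (names are hypothetical):
consumers = {"pod-a": 0, "pod-b": 25, "pod-c": 7}
```

Because credit is renewed message by message, this naturally shifts traffic away from slow consumers without any explicit health checks.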
F
Yeah, the question comes a little bit from my point of view, because I'm trying to analyze video data, and we are currently using about 15,000 frames per second from one camera device. And the question is, if I run it from one service, for example a camera reader or something like that, into another service, I don't want to use a network connection for that.
B
This is kind of like maybe streaming; you're not streaming with a socket, are you?
F
Not yet. I'm quite sure I would love to just hook into the stream, but right now I'm just doing IPC calls.
G
Yeah, Moritz, this is actually something that I would love to talk to you about, I think probably outside the context of the meeting, just so we can go deep. But this is something that I've handled many times: sharing data between edge microservices without using the network, and then using it when necessary to publish the results. This is so on point to what I see day to day; I'd be happy to help you out.
A
Whatever you guys want. I mean, it took a little bit longer than we expected, but we can continue this conversation offline. So if you have more questions, I think, you know, Ken and I and everybody will be more than happy to answer them on the Slack or wherever; just reach us. So, okay, I'm sorry.
G
Yeah, that'll be fine, it'll be fine. I just want to introduce my thing and start some trouble. I actually want to start by saying: if I had an annual award for the best presentation in edge computing regarding technology, you would have just won it. I am such a fan of AMQP. But the thing is, it's funny, it's timely.
G
For Eclipse ioFog, we just finished switching over to Artemis MQ because of all of these reasons, and it's not out yet, but as soon as it comes out, it's basically going to be saying the exact same things that you just said about how edge infrastructure should be running. If you don't mind, I might reach out to you to ask for a little bit of architectural guidance on the implementation.
G
I can hook you up with people who can help you. Excellent, yeah. We used to have this hodgepodge thing called ioMessage; that was our version of handling a lot of this stuff years back, and we kept it in place until recently, when people have been asking for, you know, the kind of self-optimization of pathways and stuff. So anyway, I'm super excited to be doing the launch of that, but that's not out for another month or so yet.
G
So thank you very much, and I'm glad this was recorded, because this is seriously, like, my favorite tech talk on this stuff in I don't know how long. So awesome, cool. Thank you, thank you.
G
So much, yeah. Great, so everybody, I'll share my screen and we'll look at the security doc. I wrote this document last night so that I could finally kind of put my money where my mouth was in terms of writing this thing up and getting it into the working group. Can everybody see the edge security challenges Google Doc? Yep? Okay, so yeah.
G
So I put this together, as I said, last night, so it's not that long and it doesn't have that much in it yet. But the whole purpose — and I left the little dot dot dot after the, you know — is that I'm fully expecting to have a whole bunch of primary authors on this.
G
What this thing has in it is — and I, you know, encourage you to read it and start dropping in comments. Please use comments for major content changes, so that we don't end up losing value; but if you're going to add anything, I think you can go ahead and insert it, and then everyone will be reviewing and reading.
G
But what this thing has in it is a bunch of sections on the different types of edge challenges for security. So, as an example, talk about trusting hardware: we all know that there's stuff out there that's already good — secure boot, hardware root of trust, etc. This is a discussion around what challenges still remain. We all know that attached devices are a huge problem, right? How do I even know that the sensor is the right sensor? And so on; these things come into discussion in this document.
G
It's meant to be a living document until we decide that it's complete and want to release it to the public. And yeah, that's the gist of what this thing is and why it's here, and I want you all to, you know, look at it, give feedback and so on. Hopefully it provides some help for folks. There are maybe 20 or 30 more items that I didn't have time to drop in.
G
Cool. And of course, Frederick, it's great to see you on, after we've been on two different meetings this morning — this morning my time; I think it's a little bit later for you. Yeah, we will, of course, from the Eclipse Edge working group, be focusing on implementation of solutions to these, and I think it's pretty safe to say that the Eclipse Edge working group would be happy to share implementation and solution architecture stuff with this Kubernetes working group.
G
It's not a one-way street by any means, and all the stuff will circle around. But yeah, that's where we're focusing: more on implementations. And here, I see this Kubernetes IoT Edge working group as a great place to talk about the demand from industry, and a great way to do that is to talk about what everyone's security concerns are; that's a good starting paper before we start to solve.
A
So, Kilton, maybe we can put it on the agenda for the next meeting, to go into, you know, more details.
G
Oh sure. You know, by then we'll also have commentary from, you know — I'm happy to. That one's going to be the flip time, right? So it will be the evening session for me; no problem, I'll put it on my calendar that you want me to go into deeper detail then.
G
The next two weeks sounds good, and if it's helpful, if people can't make it to the different time-zone-oriented meetings, I'm happy to present in two different ones. Or the recording is a good way to distribute that, right? Yeah, okay, great, cool. All right, I'll stop the share, and if anybody has trouble accessing it or whatever, I think the Slack channel has been where we've been kind of opening up stuff like that. Thank you, Dan, for opening access for me to drop this in. No worries, no worries, cool.