From YouTube: Kubernetes WG IoT Edge 20190717
Description
July 17 meeting of the Kubernetes IoT Edge working group - discussion of LF Edge glossary, presentation on an MQTT based edge broker concept
A: So we've hit the recording, and now we can officially start. As with every meeting, if anybody that's new to the group wants to introduce themselves and state their interest in the group and in the area, this is the perfect time to do so.

A: I can go ahead. John, hey! I guess I'd consider you a new folk, but your presence is definitely appreciated.
B: Yeah, I have not attended very often, so for the rest of the folks on the call: my name is Jaromir. I have been there for over seven years now, so I'm part of the Red Hat company, and I am a product manager for edge computing at Red Hat. My main interest is to learn what kind of use cases you are seeing and what kind of needs you have from Kubernetes, and to try to contribute from our perspective as well.
B: You'll have me here more often, I hope. I hope that now I'll be more free, and that will let me contribute more here.
D: Okay, so we've had some dynamic agenda additions, which is great.
E: Yeah, sure, that's great. For background, let me tell you about the State of the Edge first. The State of the Edge is a group that I co-founded. It is sponsored by vendors, but it's vendor-neutral, and its mission is to publish vendor-neutral research on edge computing. We published our first report last year, in June: the State of the Edge 2018.
E: In mid-August we're going to publish our 2019 version. There will be two substantial upgrades over the previous State of the Edge report. One of them is that we're doing a pretty extensive forecast model.
E: So we're going to try to do a real one and give it away for free under a Creative Commons license. We're also doing a section that we're calling Postcards from the Edge, where we're inviting third parties to contribute point-of-view essays, similar to what Chris Aniszczyk did last year representing the CNCF.
E: So, starting there, I would like to invite this working group to contribute an essay. The way it would work is that we would agree on a point of view, and we have some guidelines for how it's written, but neither of those should be any trouble. We need roughly a 250-word abstract, which will actually be published in the PDF of the State of the Edge, and then an approximately thousand-word essay (it doesn't have to be exactly that, more or less) that will be part of the supplemental online material and will be promoted separately as well.
E: So if the working group has a topic that it's passionate about and would like to get out in front of, one that represents a point of view coming out of this group, I'm happy to help shepherd that through the process. It's completely optional, but I think it would be good to express this group's viewpoint and also promote this group's work. So that's the State of the Edge, and I would need to get that essay sometime before the end of July.
E: Ideally, that is. The second thing is the appendix we built in the State of the Edge last year, called the Open Glossary of Edge Computing. Our goal was to gather empirically and sift through all the different definitions in and around edge, and come up with some opinions, points of view, canonicalization, disambiguation, and all of that. Obviously something like that is always a work in progress, so what we did is take it out of the State of the Edge organization and contribute it to the Linux Foundation.
E: All the definitions sit in a GitHub repo, people do pull requests all the time, and it's one of the founding projects of LF Edge. This month, actually, we're going to rev it to v2.0; August 4th is when we're tentatively going to pull the switch on that. And we are working with the governing board of LF Edge to push that sort of canonicalization across all of the LF Edge projects, using a variety of different mechanisms to do that.
E: Most of them follow the principles of open source: trying to encourage people to do it based on knowledge and meritocracy. But we may in fact make it a qualification for projects to reach a certain stage that they demonstrate conformance to the lexicon. And of course, if people disagree, that's the whole point of an open source project: there's a mechanism by which you can change it. So with that background, I welcome contributions from this group to the glossary.
E: And I would like to encourage this group to consider adopting it as appropriate, as you write white papers and such. Again, it would be a two-way street: if there are terms or phrases that you feel need to be defined differently, or need to be defined at all, we're happy to incorporate that, and vice versa, we're happy to get feedback on the language in use. What I'd love to see, certainly within the Linux Foundation family, is everybody talking about edge in the same way, using the same terms, because when we don't, it just creates chaos. It lets vendors and pundits dominate the language, and I think it slows down the industry's progress.
E: Let's see if there's anything else to add to that. I think that's essentially it. I just wanted to share the plans on that and what we're doing over at LF Edge, and to invite that essay and contributions to the glossary, if you have the appetite for it.
A: Yeah, that's great. I think the only issue is whether we can find the resources to do it in time, but let's try to organize it.
E: Yeah, but we can. I mean, I think just starting the dialogue would be ideal, if there's somebody in this group, whether on this call or in the larger group, who wants to, who's willing to be a champion for the coordination.
E: That's probably the best first step, because then, as this group looks to publish a white paper, I can help channel resources, review the white paper, and identify anything we think the glossary should benefit from or contribute to. It's also just a conduit, a way to create a relationship between the different organizations that I'm a part of.
E: If that's all appropriate. Or, if you wanted to excerpt from the security white paper, and you think the timing is going to work, I certainly would be happy to link to it and use that as a promotion mechanism.
D: The general value of a broker in general is not that appreciated for service-to-service communication.
D: I don't think it's anything new for people who know brokers, but for some reason the ecosystem just seems to be oriented more around Istio-style service-to-service communication. Once you are running an MQTT broker of any kind in a cluster, though, you have this great opportunity on the edge to talk between devices locally, between applications within the cluster, and between applications and local devices. Even running completely offline, just as a LAN-based broker, it is a useful thing unto itself.
D: What I wanted to do then was take the Mosquitto broker and use the bridging feature it has, so that essentially a subset of the topic namespace routes to our cloud IoT service. The syntax for this is not super well documented, so it took me quite a bit of trial and error to come up with the right incantation. The idea is that these local broker namespaces, all of which I prefix with gcp, route to our device-specific topics on our managed cloud service.
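As a rough sketch of the remapping just described (the gcp prefix matches the transcript, but the /devices/<id>/... remote layout and the function name are illustrative assumptions, not the actual service's scheme):

```python
# Illustrative only: maps a locally prefixed MQTT topic to a
# device-specific topic on a managed cloud service. The "gcp"
# prefix and the /devices/<id>/... layout are assumptions.
def remap_topic(local_topic: str, device_id: str, prefix: str = "gcp") -> str:
    if not local_topic.startswith(prefix + "/"):
        raise ValueError(f"topic {local_topic!r} is outside the bridged namespace")
    suffix = local_topic[len(prefix) + 1:]
    return f"/devices/{device_id}/{suffix}"

print(remap_topic("gcp/events/temperature", "edge-device-1"))
# -> /devices/edge-device-1/events/temperature
```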
D: If you post to the local topic, it gets relayed to the corresponding cloud topic, and you set up the bridge communication by configuring the remote host. Now, one of the things that is particular to this intersection of Kubernetes and our cloud service is that the password used remotely is something that needs to rotate.
D: One container is the standard Mosquitto container, totally unmodified, not custom, as pulled from Docker Hub; the other is my custom refresher, and they share a volume where the Mosquitto configuration is. The other kind of interesting thing I figured out how to do in here is just plain Kubernetes,
D: you know, deep tech of Kubernetes for fun and profit. In here I also set up the capabilities to allow a shared process namespace, so that I can SIGHUP the Mosquitto process from my refresher container after rewriting the configuration.
D: It uses Kustomize to take some of the project-specific overlays and patch them into the standard deployment YAML, as well as to generate the ConfigMap with the device private key. So that's an overlay; for those who haven't used Kustomize, it works on base layers plus modifications.
D: The cloud service can only open one MQTT connection for a given device, and that means that if you want other local devices, or even different applications in Kubernetes, all interested in subscribing to a certain event from the cloud, they can't create independent MQTT connections and independent subscriptions. They can, however, make independent local subscriptions to the broker, and because it's bridged, when that single, say, command event arrives, you can have multiple listeners or subscribers locally.
D: I'll stop here, before I stop sharing, to see if there are any questions for which it would be worth me keeping either of these tabs open so I can quickly show you. If there are no questions of that nature, I can stop sharing. Any questions?
A: I'm just wondering, is there any benefit to using the broker there, or do you just need the bridge? And the reason you're using the Mosquitto broker is that it just has that functionality, right?
D: Even pod to pod, right, because the other nice thing is that you can expose that broker using cluster-local DNS. So all these local clients need to do is...
D: Let me see if I have that as a quick example. The hostname in Kubernetes is just mqtt-bridge, right? So your MQTT clients in these applications can just use a nice, simple-to-refer-to hostname, because it's a Kubernetes Service.
D: So that's some of the utility I'm calling out: even if you don't bridge, just having a broker in Kubernetes is not a thing that you see a lot of, I think just because the tendency is to go with Services as the pattern. The thing that I'm going to build next on top of this is to use this broker as a Knative event source.
D: So then one of these applications, instead of using a native MQTT client, can use the Knative eventing constructs to subscribe to a certain subscription type for MQTT. It's a whole lot of machinery to basically convert one message type into another message type, but I think that way it'll let people who are totally unfamiliar with MQTT come at it from a Knative angle.
A: We're basically doing a similar kind of thing, I think, but yeah, maybe I hope for the next meeting we can give more of a presentation on that front as well.
A: Well, I wasn't thinking about the Knative part, but yeah, there's definitely a group of people working on all kinds of bridging. What I meant is more like these direct proxies for different kinds of traffic, for HTTP or MQTT, over an overlay AMQP network, and bridging the services between the clusters and things like that. It's similar to what we already presented here at the concept stage, so, you know, trying to make that more consumable.
A: Yeah, so don't think about the broker topic; you can think about general AMQP addressing, which can address services directly, or whatever. Yeah, that's cool. So maybe there's a way to do something in common, because I think it started out targeting basically the same, or at least a similar, use case.
D: ...to standardize on CloudEvents, and yeah, it's harder than it should be to create an event source.
A: Okay, I'll try to do that, but yeah, I believe so.
D: ...stuff. So maybe, you know, share that at a future meeting if you get things to a milestone. Does anyone have any questions around the use of Kubernetes, or the use of Kubernetes on the edge, that they are seeking an answer to, or sort of guidance around best practices for anything?
D: I think it comes up here. I definitely think about the flavor of multi-cluster that I call "many small clusters" versus "a few large clusters." I think a lot of typical multi-cluster Kubernetes interested parties come to multi-cluster from the context of "I'm running five globally distributed large clusters, and I need to do a global rollout of software across my three to five big clusters." Whereas in edge IoT, like the Chick-fil-A case we often talk about, they've got 2,000 tiny clusters, and so the challenges around multi-cluster are tuned or tweaked, but largely common. We don't try, in the working group, to usurp any of the multi-cluster SIG's charter, but we're looking at, and interested in, whether people have thoughts around where this needs to be different for edge, or where edge adds additional considerations.
D: With the few-large-clusters flavor in general, you're going to find that most of the Kubernetes ecosystem is looking at running Kubernetes in non-edge environments. So I think a lot of the fundamental communication patterns, once you are pushing something to more than one cluster, as soon as you have two clusters in your multi-cluster, some of the systems will just work; but anything beyond that, and especially, I think, some of the UIs, are kind of problematic.
B: From my perspective, since you're asking, I would be interested: you probably have a use case where you need federation for remote edge locations. If we are talking about these hundreds and thousands of locations, what kind of federation would you be expecting there?
F: Yeah, I don't have the details of any specific use case. I actually work in the OpenStack StarlingX group; it's a kind of integration-type project within OpenStack, and StarlingX is targeted...
F: It positions itself as targeted at the edge, but one feature within it is a distributed-cloud-type feature for OpenStack, which is, at a very high level, analogous to Kubernetes federation. So far, though, the customers (we also have a commercial product that's based on OpenStack StarlingX) that are evaluating it...
F: It ranges: there are some that are looking for just a small handful of edge clouds, and then there are some that are talking thousands.
F: We built a lot of orchestration capabilities on our own, but now that we're running Kubernetes across potentially many clouds, we wanted to leverage the orchestration work that's been done in the Kubernetes federation area.
B: Right, yeah, I'm coming from an OpenStack background as well, so I'm fairly familiar with those kinds of use cases. Overall, though, what we've seen, at least from my perspective, when we get to these remote edge locations and talk about federating them: those remote points of presence are fairly independent, so they are really standalone clusters, like the Chick-fil-A example.
B: Those clusters are completely standalone. What you need is unified management and, ideally, an automated GitOps model for managing those environments. But as for federation and some service chaining or something between those locations, it's not really requested, at least in this large-scale class of use cases.
F: Yeah, actually, that's consistent with what we've seen: first, the edge clouds are fairly independent, even in the sense that if they get network-isolated from the central cloud, they need to be able to continue providing full service.
F: So they are fully independent, standalone clouds, and the orchestration that we've done has been more around infrastructure orchestration: identity orchestration, for instance, making user IDs and logins common across them, and, in OpenStack, managing Glance images across them all.
F: Managing things like Nova flavors, key pairs, and security groups: the kind of common configuration aspects that someone doing high-level management across all the sub-clouds would typically want to manage globally across them all.
B: Correct, and I would expand on that. This is usually solved by layering the configuration templates, so that you have some global configuration, then another layer where you specify things for a specific region, and then another layer of configuration which is specific to a location. So that is very often done by layering the configuration files.
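One hedged way to realize this layering with a tool like Kustomize (directory names invented for illustration): each layer includes the one above it and patches only what differs.

```yaml
# sites/store-042/kustomization.yaml (illustrative)
# Layering: global -> region -> site; each layer only
# overrides what is specific to it.
resources:
- ../../regions/eu           # region layer, which itself includes ../../global
patchesStrategicMerge:
- site-patch.yaml            # location-specific values only
```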
D: The other interesting pattern (I lost some of the work on my laptop, the one I spilled coffee on, but I'll go back and recreate some of it) is that Kubernetes supports OIDC as a source of authorization and identity for clusters. So you can set all of your clusters to recognize the same issuer. The demo I built was using...
F: Hey, can I ask what kind of OIDC connector (I don't know what the proper term is) you were using for that? I've been kind of playing around with Dex; is that one of the more popular ones, or...?
D: Yeah, that's certainly one of them. We have a managed service that has gone under both Firebase Auth and now what's called Cloud Identity Platform on Google, but it's the same backend. It started as Firebase Auth, and it was built to let people building mobile applications make it really easy to log in with X, Y, or Z; but then it becomes its own issuer.
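Pointing a cluster at a common issuer is done through kube-apiserver's OIDC flags; the issuer URL, client ID, and claim names below are placeholders:

```shell
kube-apiserver \
  --oidc-issuer-url=https://issuer.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```

Repeating the same flags on every cluster lets one identity provider serve them all.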
D: Another observation on multi-cluster that I'm seeing, and Chick-fil-A does this, is something we have a couple of projects around; I think the name now goes under Anthos Config Management. It's the idea of using sort of GitOps for declarative policy across multiple clusters. So, let me see, was it Jaromir?
D: Okay, so you pronounce the J. So, as Jaromir was saying, this idea of layers: you would have one or multiple Git repos that each contribute to those layers, and then each cluster would have a task in a privileged namespace that pulls those configurations down and then applies them, or runs certain controllers that apply them. That seems to be a pretty good pattern for applying that kind of global multi-cluster policy.
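A bare-bones sketch of such a pull-and-apply task as a CronJob (repo URL, image, namespace, and cluster path are placeholders; production systems such as Anthos Config Management run a dedicated controller instead):

```yaml
apiVersion: batch/v1beta1        # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: config-sync
  namespace: config-system       # the privileged namespace from the discussion
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: config-sync   # needs RBAC to apply resources
          restartPolicy: OnFailure
          containers:
          - name: sync
            image: example/git-kubectl      # any image with git and kubectl
            command: ["/bin/sh", "-c"]
            args:
            - >
              git clone --depth 1 https://git.example.com/policy.git /tmp/policy &&
              kubectl apply -R -f /tmp/policy/clusters/store-042/
```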
B: Especially in large numbers: if you have thousands of clusters to manage, this is almost a must, yeah.
D: The other thing I'm seeing is that, in general, people are looking at a namespace as a global namespace for authorization, meaning that if you define a namespace, you define it on all of your clusters, and it represents a certain kind of role, workload, etc. So on an edge cluster, that namespace might be running workloads that communicate with equipment that is mission-critical or sensitive, at every installation where that edge is installed. But you want to avoid a pattern of coming up with edge-location-prefixed namespaces, just because then you deal with a proliferation and management burden that makes it harder to use that kind of global config.
B: Right, yeah. You just define what you need, say a high-computing-specific class or something like that, to be really specific about what kind of workloads you need, but don't be specific about "this is the location I need it in," unless you really do need to target that specific location for some reason.
A: I want to second this. Yeah, I think that should be it. We had good attendance; I wasn't expecting this many people, as I said at the beginning, given it's the middle of July, but it's good that the interest is here. So let's keep in touch on Slack around all the topics that everybody is interested in, and see you in two weeks.