From YouTube: Kubernetes WG IoT Edge 20181026
Description
October 26, 2018 meeting of the Kubernetes IoT Edge Working Group, discussing the Linux Foundation Open Glossary of Edge Computing
A
Let me know when I can continue — yeah, okay. So, as I was saying, I represent the Open Glossary of Edge Computing, which is a Linux Foundation project, and I'm doing the work to liaise with other groups that have an active interest in advancing the industry around common terminology. There's a project that I'm just about to kick off which is gaining quite a bit of momentum.
A
So, one of the things: when I've gone to people in the community and asked for feedback, one of the things they've asked for — particularly if they have an active interest in an area of edge... The three big areas are the infrastructure edge, the device edge, and then the software stack, and I've gotten a number of requests to go two or three levels deep on a taxonomy for those different areas.
A
So iMasons and the TIA are working on an edge infrastructure taxonomy, and I'm looking for volunteers to help with the software stack and/or the device edge taxonomy. We're going to call it the Taxonomy Project and, again, run it as part of the open source effort at the Linux Foundation.
A
So if anybody has any interest, just drop me an email or hit me up in chat during this meeting, and I'd be happy to involve you. I will continue to report to this group and let you know progress.
B
Great. Have you gathered, as a first order, an inventory of taxonomies that are already done, and are you looking to blend those in?
A
Yes and no. When we did the State of the Edge project, which the glossary spun out of, we did do a pretty comprehensive taxonomy. But, to be honest, we didn't go as deep as some of the working group folks want to go. For instance, iMasons are at their edge meeting in Austin, where they're crowdsourcing taxonomies and going deep into things like cooling and resilience.
A
Sizes of data centers — things like that. So my goal here would be to take the existing work in the open glossary, but also to have the working groups attached, identifying existing material that we could use.
B
I guess my other quick comment is: have you been looking at the work done in the first white paper from this group, to see if there's any conflict in definitions, or do you think what we have is pretty well aligned?
A
I think it's pretty well aligned, because we seeded it with the concepts from the open glossary, and I've been involved with the CNCF since its foundation. So I think it is fairly well aligned, but I think we should attempt to fully align them before the paper goes out. And the way we're approaching this is...
A
As I said, it's an open source project sitting in a GitHub repo; anybody can raise pull requests or issues. What I would propose is: maybe when we get to the point where we've locked down some of the terminology for this working group, one or two of us go and rationalize that with what's in the open glossary, and then, if we want to propose changes to harmonize them, I can steward those.
C
Did you have anything to add to this topic?
C
So yeah, maybe this is a good segue into another topic. We had an Eclipse event in Europe this week, with the IoT day there, and Harley actually had a lightning talk on the edge use case within Siemens and some of the problems they are tackling there. So I propose that maybe next week he can give us a similar presentation for this group, and maybe we can see how to proceed with this.
C
But to me it looks like it could be the basis for one of the use cases that you want to tackle, so maybe we actually start organizing around that — what we were talking about wanting to do: for some of the use cases defined, start creating reference architectures, blueprints, demo deployments, and things like that. But maybe just start with the presentation next week and see what everybody thinks about it.
B
Yeah — any objections? I mean, it was great when Cindy presented KubeEdge, so I think anything we can do to use this group as a chance to get a preview of what's happening in the member sub-ecosystems. And I'll be happy to do the same at some point — things are still too early for me to do so, but our own edge offering is continuing to build. So I'd love to.
B
One anecdote that I think this group would be interested in: we had one of our cloud summits in London a couple weeks ago, and I had a chance to talk to Kelsey Hightower a bit about some of the edge and Kubernetes stuff. I'm sure most of you know his name from the Kubernetes side of things, but it's actually interesting, because he had started...
B
He had an earlier career at Google on the data center team, in the earlier stages of the data center, where he did a lot of bare-metal configuring of servers, etc. So we talked about what I think is one of the things most fundamental to what makes Kubernetes for edge different from other environments: you're often dealing with the bare-metal problems of starting from scratch, and with a few machines. And if you know his style, he pooh-poohs complexity to the extreme.
B
So his point was around how you can get a simple three-to-five-node Kubernetes cluster on bare metal going by not trying to come up with fancy base images, etc. — just doing a degree of hard-coding of IP addresses and network ports, and then a PXE boot and DHCP. So I think part of what's an interesting space to explore in the realm of Kubernetes on the edge is the lifecycle management part at the beginning, right? How do we...
B
How do we make it easier for an OT environment group — who's not going to be able to assume things like VMware already running on a large cluster — to get a brand-new three-to-five-node mini Kubernetes cluster up and running? kubeadm has made big improvements in that, but I think there's still a bunch of interesting stuff that maybe this group can contribute to that isn't already covered by some of the others.
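The hard-coded bootstrap approach described above — skip fancy base images, pin the addresses, then kubeadm the nodes together — can be sketched as a small script that renders the commands from a static inventory. Everything below (hostnames, IPs, the token, the CA hash) is a hypothetical placeholder, not anything from the discussion.

```python
# Sketch: render kubeadm bootstrap commands for a tiny hard-coded
# edge cluster, in the spirit of "just hard-code the IPs".

NODES = {
    "edge-cp-0": "10.0.0.10",   # control plane
    "edge-w-0": "10.0.0.11",    # worker
    "edge-w-1": "10.0.0.12",    # worker
}
TOKEN = "abcdef.0123456789abcdef"          # placeholder bootstrap token
CA_HASH = "sha256:<pinned-ca-cert-hash>"   # placeholder discovery hash

def render_commands(nodes, token, ca_hash):
    """Return a {hostname: shell-command} plan: init on the first node,
    join everywhere else — the whole 'plan' is just hard-coded data."""
    cp_name, cp_ip = next(iter(nodes.items()))
    plan = {cp_name: f"kubeadm init --apiserver-advertise-address={cp_ip}"}
    for name, ip in nodes.items():
        if name == cp_name:
            continue
        plan[name] = (
            f"kubeadm join {cp_ip}:6443 --token {token} "
            f"--discovery-token-ca-cert-hash {ca_hash} "
            f"--node-name {name}"
        )
    return plan

plan = render_commands(NODES, TOKEN, CA_HASH)
for host, cmd in plan.items():
    print(host, "->", cmd)
```

In practice the same static data would also feed the PXE/DHCP configuration mentioned above; the point is that for a handful of nodes, plain data beats a provisioning framework.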
E
Yeah — my experience talking to people is: everything you've said started with this premise of three to five nodes, but I'm talking to people who are talking about one- and two-node systems. And you also dropped that line about a large VMware cluster, but there are plenty of people — people have been using VMware and other hypervisors at edge locations for a decade now, and it doesn't have to be a large cluster.
E
There might be a case for that being a stronger argument for running on a hypervisor of some sort. I don't mean to plug my own employer's product — it could be anybody's hypervisor — but being able to have triple redundancy on your etcd gives you the operational advantage of being able to patch these without taking an outage. And, granted...
E
You could run multiple instances of a service on top of Linux itself, or multiple containerized instances of etcd, but you're still going to have to do kernel patches, and having the hypervisor there gives you a means to do that — and perhaps stage the new version before the older one has been taken off the air, and recover from tricky things like an attempted update that gets halfway through. That can be a real bad day at an unattended site.
E
Yeah, and firmware. But the thing with the hypervisor is that many of them would allow you to migrate from one node to another. Granted, you can't do this on a single node, but if you have two, you can potentially temporarily migrate your critical workloads off of one, so that you can even do a firmware patch on the hardware itself.
B
Yeah — in fact, Steve, I don't know if you have any pointers to docs or outlines of how one operationalizes many small installations of something like VMware, where you have to figure out a way to economize the setup time, because you're doing it for such a small investment of nodes.
E
Right. As for the search terms you might need to find those: as I say, people have been using that for years, from before the term edge became popular, so they're often more discoverable by looking for ROBO — remote office/branch office — and terms like that, rather than terms like edge and IoT. But those are out there. And then, of course, it being a commercial product, it could be that some of these knowledge-base things are somewhat shielded behind corporate websites — but they are out there.
E
I think a big problem, in addition to just patching, is actually the scale of this. I've seen people come up with a demo of using moderate automation to deploy a three-to-five-node system as just an instance of one, but I think the bigger problem is: what do you do if you want ten thousand of these, and to manage them at an enormous scale, and then get some that drift over time as well?
E
Realistically, somebody putting out ten thousand, or even one thousand, locations isn't going to do it in one big bang and then update the hardware and platform in one big-bang incident. They're going to have a lot of non-uniformity in terms of what's out there at those edge locations, and I think that when you architect the means to administer these remotely at scale, you have to assume that there are going to be differences in each of these nodes — which makes the problem tougher.
E
So, if you don't mind, I added an item to the agenda just to do a recap of what I found. I just came back — I'm pretty jet-lagged after 20 hours of travel — but I gave a presentation on Kubernetes at the edge at Open Source Summit in Edinburgh, and, if anybody's interested, I put a link to the deck in the notes.
E
If you were to read the blog, you perhaps aren't going to get a lot out of looking at the deck, because it was an attempt to condense it down. But some of the discussions afterward, and the people who showed up with interest in the topic, were perhaps more interesting.
E
You can't tell about everyone in the audience, but a group of us took it into kind of a long hallway discussion afterward, and it was almost all telecommunications people that were interested in this topic. If I were to summarize it: it's not like they're on Kubernetes now; they're thinking they might like to go there, and have a need for something like it — that's the way I'd describe it.
E
I also met somebody from EdgeX Foundry, which is an attempt to deliver and manage containerized applications at scale for IoT and edge, and they expressed an interest in this.
E
This problem of bootstrapping is something they're encountering too. The organizer of this EdgeX Foundry group came to me saying that they've been looking at Kubernetes as perhaps a way to help them address this bootstrap problem Preston was alluding to — thinking that we had all the answers already. It was, in a way, kind of funny, because both sides — I went to EdgeX Foundry hoping that maybe they had some magical solution for the bootstrap problem. But there might be some value in even that group getting involved.
E
So a lot of these people said they would be joining this group, but that conference didn't end until yesterday, so I suspect that many of them didn't travel that evening and are off the air today.
B
When you said, Steven, that they were interested in Kubernetes in this sort of abstract way — do you think they were conflating literal Kubernetes, as in a running cluster with pods, etc., versus adopting Kubernetes design patterns for other real things in their...
E
Operations? Let's define the "they", because there were two groups — the telcos and the EdgeX Foundry folks — so I'm going to go with the telcos. The telcos have this vision. I can't speak for every one of them, but a lot of them were talking about a new world of LTE and 5G bringing very good connectivity out to edge locations, and they've already got things like cell towers and switching offices that would be capable of hosting some compute that moves out to the edge, to take advantage of latency benefits and resiliency.
E
The link between the sensors and the data they're collecting is thought to be far more reliable than getting it up to a central cloud location, or a small handful of central cloud locations. And they have this vision: they need to host their own compute and software-defined switching workloads for sure, so they're paying for those — let's call them mini data centers, if you will. They're not going to be AWS-like or Google-like data centers, but they are still data centers, and they've already gotten used to this problem.
E
They actually gave me a reference to a book — one of them was lamenting the fact that, over on the compute side, there's a bunch of redefinition of terms related to high availability that they deemed to have been solved problems — and I posted the link to that book.
E
I think in the Slack of this group. But they said that there are people who appear to be reinventing things over on this, our side of the world, that the telcos viewed as solved problems in the 90s — and there might be some truth to that. I'm going to go get that book myself. But they had two things they were interested in: they think the workloads they're going to deploy at these locations will be containerized, and they will be remotely managed — and, you know, that starts you...
E
That should start you on a path where, if you do a search for existing technology, Kubernetes pops up as a possible solution — so that's kind of the stage it's at. They're also concerned with issues like — well, Cindy's topic brought up the subject of leaf-node-to-leaf-node data communications, where they don't want to do hairpins up to the central cloud. The state of Kubernetes...
E
For that, I think, is perhaps open; they're not convinced Kubernetes would work for them, but they're looking at it, and that's kind of the stage they're in. And after hearing them talk about their needs, it sounds like one of the issues with Kubernetes is that people often come up with this initial approach that gets shot down: the Kubernetes control plane sitting in the public cloud, managing worker nodes remotely. But there's a big issue with that — not just the scale, but the current limit.
E
I think the current Kubernetes limit is 5,000 nodes, and that isn't enough for those kinds of use cases — plus how secure that path is, and what happens when that path goes out. They'd really prefer a situation where it's...
E
What in the control world would be called supervisory control: the central site puts it in a desired state, but you could perhaps lose your communication to that central site for hours, maybe even days, and still continue to operate — and even recover from power cycling. I'm not sure that the current state of Kubernetes, if you put your control plane up at the central cloud and simple worker nodes down at those edge locations, is up to that mission. Maybe we could get it there, but I don't think it's there today. Anyway — did that answer your question?
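The supervisory-control model described here — central publishes desired state, and the edge keeps reconciling from a locally persisted copy while the link (or the node) is down — can be sketched roughly like this. The state shape, file path, and class name are purely illustrative, not from any real product:

```python
import json, os, tempfile

class EdgeReconciler:
    """Pull desired state from 'central' when reachable; otherwise keep
    operating from the last copy persisted on local disk, so the node
    can continue through link outages and recover after a power cycle."""

    def __init__(self, cache_path):
        self.cache_path = cache_path
        self.desired = None

    def sync(self, fetch_central):
        """fetch_central() returns the desired state, or raises
        ConnectionError when the link to the central site is down."""
        try:
            self.desired = fetch_central()
            with open(self.cache_path, "w") as f:
                json.dump(self.desired, f)        # persist for power cycles
        except ConnectionError:
            if self.desired is None and os.path.exists(self.cache_path):
                with open(self.cache_path) as f:  # recover after a reboot
                    self.desired = json.load(f)
        return self.desired

def link_down():
    raise ConnectionError("central unreachable")

# Simulated run: one good sync, then the link goes down, then a "reboot".
cache = os.path.join(tempfile.gettempdir(), "edge-desired.json")
node = EdgeReconciler(cache)
node.sync(lambda: {"app": "telemetry", "replicas": 2})  # link up: fetch + cache
print(node.sync(link_down))                             # link down: keeps going
```

The gap the speaker identifies is that a plain kubelet-to-remote-control-plane setup does not behave like this today: the local desired-state cache and reboot recovery are exactly the supervisory-control pieces that would have to be added.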
D
I think that goes back to the fact that cluster federation is still at a very early stage in Kubernetes.
B
The reason I asked whether people were looking at Kubernetes as a literal thing versus an analogy to their world is that, earlier this week, we had our annual internal, Alphabet-wide IoT summit.
B
It brings in Nest and the hardware team and the cloud team, and one of the groups is our internal smart-building group. A colleague there, Trevor, has in the past used software-defined networking as a comparison for some of the building problems, and he's now trying to switch to using Kubernetes as the analogy, because it's dealing with the control and orchestration of all these different pieces.
B
But it got us onto a conversation which made me think of one of the other opportunities for Kubernetes and the edge: using the CoreOS operator pattern, right? This idea of taking a custom resource in Kubernetes as a control-plane orchestrator, but whose actions are taken entirely on external things.
B
So: an operator running in Kubernetes on the edge whose actions are all taken on, say, things like building management systems and other external-to-the-cluster systems, but with the logic of the operator still running in the cluster. I don't know if that's something anyone else has come across in conversations.
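The operator-on-external-systems idea boils down to a level-triggered reconcile loop: compare the desired state declared in a custom resource against the observed state of the external system, and issue the minimal corrections. A toy sketch, with the custom resource and the "building management system" both simulated as plain dicts (all names and fields are illustrative):

```python
# Toy operator reconcile loop: the "custom resource" declares desired
# zone setpoints; reconcile() drives a simulated external building-
# management system (BMS) toward them, the way a Kubernetes operator
# would on each watch event.

def reconcile(custom_resource, bms):
    """Compare desired vs observed state and issue the minimal writes.
    Returns the list of actions taken (empty once converged)."""
    actions = []
    for zone, desired_temp in custom_resource["spec"]["setpoints"].items():
        observed = bms.get(zone)
        if observed != desired_temp:
            bms[zone] = desired_temp          # the "external" side effect
            actions.append(f"set {zone} -> {desired_temp}")
    return actions

cr = {"spec": {"setpoints": {"lobby": 21, "server-room": 18}}}
bms = {"lobby": 23}                # drifted external state
print(reconcile(cr, bms))          # first pass corrects both zones
print(reconcile(cr, bms))          # second pass is a no-op: converged
```

The key property — and why the pattern fits legacy edge systems — is that the loop is idempotent: re-running it against an already-correct external system does nothing, so crashes, restarts, and missed events are all recoverable.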
E
Well, I like that idea for contemplation. And yeah, at Open Source Summit, I think the telcos are completely uncommitted. They like what they see in Kubernetes, and the collateral tool chain is such that, if you get an API that is Kubernetes-like enough, maybe you can continue to utilize all of this collateral activity. It's one of the highest-velocity open source things going on on the planet, and if you went off on your own with something just a little different, for a little more advantage, it's probably not worth it.
E
To not use that Kubernetes API — to a degree where you could no longer utilize some of this other tool-chain collateral — you'd need a really good reason. And yet they're just doing the evaluation to make the decision; it didn't appear to me that anyone has made a decision yet. Sounds a lot like that internal discussion group you had.
B
I mean, the building team is just dealing with so much heterogeneity and legacy, which is kind of par for the course in a lot of edge environments, right? And so the idea — for a telco, for example — is to say: look, you don't have to port everything to Kubernetes to take advantage of some of what Kubernetes offers. If you have ways to interact with these legacy systems, and you're simply trying to add more sane controller logic, you can deploy.
E
There seems to be a lot of advantage to that concept of managing things that don't themselves have to be architected objects inside Kubernetes, or whatever, itself. Yeah, I'd agree with that. In fact, some of the things that obviously are issues, where Kubernetes isn't ready to go — in my opinion and others' — are even things like dealing with mixed architectures of compute.
E
A lot of the assumptions in Kubernetes right now not only contend that you've got similar CPUs within the worker-node cluster, but the same architecture — x86, Arm, whatever. In these edge settings I think you're going to be a lot less homogeneous, and it isn't clear how well, if at all, Kubernetes is prepared to deal with that.
B
Although I've done some experiments with node taints and tolerations for architecture. So I'm curious where things actually break, because I think there are some primitives that can be used for that kind of thing.
E
Yeah, but it's almost like they're maybe a little too band-aid-like, where you have to engage in non-default manual configuration to a large degree. And I don't know if it would be better if somehow that was built into the nervous system, so that it doesn't have to go up to the conscious brain. Yeah, I...
B
Mean, for example, you cannot create a multi-architecture Kubernetes Deployment resource, because you're referencing an image, and an image is going to be bound to a certain architecture. So about the best you can do is have two Deployments — one for Arm, one for x86 — and then use node taints and tolerations so that they only run on the nodes of the matching architecture.
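The two-Deployments workaround described above can be sketched by stamping out one manifest per architecture and pinning each to matching nodes. This sketch uses the well-known `kubernetes.io/arch` node label with a `nodeSelector` — one common way to express the per-architecture constraint the speaker describes with taints — and the image names are hypothetical placeholders:

```python
# Sketch: generate one Deployment manifest per CPU architecture,
# each pinned to matching nodes via the kubernetes.io/arch node label.

def deployment_for_arch(name, image_repo, arch):
    """Build a Deployment dict for one architecture; the image tag and
    the node selector both carry the arch, so pods land on the right CPUs."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{name}-{arch}"},
        "spec": {
            "selector": {"matchLabels": {"app": name, "arch": arch}},
            "template": {
                "metadata": {"labels": {"app": name, "arch": arch}},
                "spec": {
                    # schedule only onto nodes of the matching architecture
                    "nodeSelector": {"kubernetes.io/arch": arch},
                    "containers": [
                        {"name": name, "image": f"{image_repo}:{arch}"}
                    ],
                },
            },
        },
    }

manifests = [deployment_for_arch("sensor-agent", "example.com/sensor-agent", a)
             for a in ("amd64", "arm64")]
for m in manifests:
    print(m["metadata"]["name"])
```

Serialized to YAML, each dict is an applyable manifest — which is exactly the duplication being complained about: two objects to version, update, and roll back in lockstep for what is logically one workload.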
B
That feels very ugly. It gets it done, right, but it feels like it's forcing a lot of extra grossness.
E
And you're going to want to bring in other things too — one of the things they're looking at Kubernetes for is image distribution, and you're going to start looking at image repositories, maybe even Helm chart repositories or something like Helm charts, and it isn't clear how well those address a mixed-architecture scenario either. You can certainly use...
E
I guess you could use independent clusters, but if you've got components — maybe some data collection that's on Arm that has to work with processing compute that's on x86 — and you engage in an update that would boost the version of both of those at the same time, it would be nice if a platform could address that.
A
While we're on this topic — this is Matt Trifiro again — I lost you guys.
A
Oh, sorry, yeah — I guess my Zoom dropped; I still have the audio call. Two things. One is that my day job is 100% focused on deploying data centers tightly integrated with the telco infrastructure.
E
I'd be happy if anybody on this call would like to put yourself on the agenda to talk about something at a future meeting of this group — that'd be great. I guess it isn't up to me — I'm not the chair of the group — but go for it; I think we're pretty informal. So if you just add future dates to that running agenda/notes document and put yourself down with a time estimate of how long you need, I'd say go for it.
E
I think somebody from the EdgeX Foundry group did want to perhaps talk at this group and give a presentation. I'm not sure anybody from any of the telcos was prepared to do that.
A
Yeah, so I'll put some time on the agenda in a future meeting to talk about telco infrastructure, and I'll bring someone who actually is from that side of the world as well. Also, there are at least a couple of software companies — startups — that are doing really interesting things with Kubernetes on the edge. They're kind of spread thin, so participating in...
A
This group hasn't been a priority for them, but one of them is Rafay Systems, and I think we should invite Haseeb Budhani — he's ex-Akamai — because I've talked to him at length about the work he's doing, and he's actually very interested in contributing it to the foundation, and/or upstreaming some of it. So I think, from an article...
A
I think they're largely in stealth, but if you read their blog, you can kind of get a sense of what they're trying to do.
E
I see Cindy popped in on the notes saying her audio doesn't work. But before we run out of time, one of the things I wanted to bring up was that some of us, I think, are going to KubeCon Shanghai, and maybe we can arrange to physically meet at some point during that event. Since Cindy's speaker is down, maybe we can just set up a block in the notes for people who want to meet at KubeCon Shanghai to put their names.
C
Yeah — and I think Chris also mentioned on Slack that they want to have some media coverage for the group there. So it would be great if those of you who are going to attend could try to get in touch with Chris and see if we can help with anything there. We don't have any formal session, but any other informal thing that we can do there would be good.
C
And just to add to the previous discussion: I agree with Preston that the Operator Framework could be a really good tool for solving some of these issues. I think that's something we should try to dig into.
B
I saw, Adam, that you added your name to the notes, coming from Docker. I don't know that I've heard Docker's quick summary — the concise version of what Docker is thinking about edge explicitly, as far as Notary or other places where you think Docker's stuff is particularly going to be relevant on the edge, beyond where it's relevant everywhere.
G
We are looking at extending our stack — tools and services — to the edge and, I guess, beyond as much as possible. We're working with some early customers, extending the platform to support their requirements, and I wanted to join this group, or at least sit in on the calls for the short term, to get some input from the industry, see how Kubernetes can be leveraged, and see what everyone's working towards. We're still building out our road map.
G
For, I guess, advanced features. But we do have some offerings today that can meet some use cases in the edge, and then we're going to be expanding to, I guess, less Docker-style edge deployments in the near future.
B
Do you have sort of a pass-through caching — where basically a bunch of systems on the edge are pulling containers through a proxy, and if it has something cached it'll just use that, and if it doesn't it'll have to go up to a cloud-hosted registry? Maybe you pre-seed it? Any of those kinds of patterns?
G
So, Preston, to answer your question: yeah, we have Docker Trusted Registry, and you can mirror them and chain them. There's a true mirror and there's also caching, depending on the customer requirements or the topology you're looking for — it can be basically a pure HTTP-cache style or a fully mirrored repo.
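The pull-through pattern Preston asked about — serve from a local cache on hit, fall through to the upstream registry on miss, and keep a copy for the next puller — is the generic idea behind registry mirrors. A minimal sketch of just that logic (not Docker Trusted Registry's actual implementation; all names are illustrative):

```python
class PullThroughCache:
    """Minimal pull-through image cache for an edge site: return a
    locally cached blob if present, otherwise fetch from the upstream
    registry over the WAN and keep a copy for subsequent pulls."""

    def __init__(self, fetch_upstream):
        self.fetch_upstream = fetch_upstream  # callable: image ref -> bytes
        self.store = {}
        self.hits = 0
        self.misses = 0

    def pull(self, ref):
        if ref in self.store:
            self.hits += 1                    # edge-local hit: no WAN trip
            return self.store[ref]
        self.misses += 1
        blob = self.fetch_upstream(ref)       # WAN trip to cloud registry
        self.store[ref] = blob                # seed the cache for peers
        return blob

upstream = lambda ref: f"layers-of-{ref}".encode()
cache = PullThroughCache(upstream)
cache.pull("example.com/app:1.0")   # miss: goes upstream
cache.pull("example.com/app:1.0")   # hit: served locally
print(cache.hits, cache.misses)     # 1 1
```

Pre-seeding, mentioned in the question, is just calling `pull()` (or writing to `store`) ahead of time, before the edge nodes need the image — useful when the WAN window is scheduled or constrained.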
E
But yeah, that's an interesting topic, because what you often want to have is some sort of governance on that as well, along with audit logs, so you can tell what images ended up going where, and not indiscriminately replicate anything. But I'd encourage you, Preston, to go take a look at Harbor. I'll see if I can — oh, it's already posted; I've already...
B
Got it, okay — I'm going to look at it, because one of the things when I looked at it, not only related to the caching side but the whole...
B
I feel like Kubernetes, unfortunately, punted on content trust. Docker has Notary, Red Hat has simple image signing, and Kubernetes itself doesn't really have a nice, clean way of doing content trust. And I think it's just one of those things that's heightened in edge environments, because your overall trust in the infrastructure is a little bit less, right? Physical security of the machines may be a little less.
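The content-trust gap discussed here comes down to one check: before running an image, verify that its content matches a digest vouched for by a trusted key, so a tampered registry or cache at a physically exposed site can't slip in a modified image. A minimal sketch of that digest-pinning idea — this is not Notary's actual protocol; an HMAC with a made-up key stands in for a real signature scheme:

```python
import hashlib, hmac

SIGNING_KEY = b"hypothetical-trust-root-key"  # stand-in for a real trust root

def sign_digest(image_bytes, key=SIGNING_KEY):
    """Publisher side: compute the image digest and 'sign' it."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig

def verify_before_run(image_bytes, digest, sig, key=SIGNING_KEY):
    """Edge side: refuse to run anything whose content or signature is off."""
    if hashlib.sha256(image_bytes).hexdigest() != digest:
        return False                          # content was tampered with
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

image = b"pretend-image-tarball"
digest, sig = sign_digest(image)
print(verify_before_run(image, digest, sig))        # True: intact + signed
print(verify_before_run(b"tampered", digest, sig))  # False: content changed
```

In a real deployment the verification would sit in the container runtime or an admission controller, and the trust root would be asymmetric keys with rotation — which is exactly the machinery the speaker notes Kubernetes itself doesn't standardize.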
E
So I think, arguably, Docker was in that position at the time — the Docker image repository was a force of nature, not to be ignored. If they had immediately started proposing to replace that, the willingness of the world to take up Kubernetes would have been reduced, so I'll forgive them what happened. But I think that is my historical observation of how that came about.
E
I think there's more to this than just signing, too, by the way: vulnerability scanning is a key piece of this. It's just like anti-virus, where many times there are these latent security weaknesses that aren't discovered until months or years later. Yet you want to have some inventory of what your risk is — if this has been out there, where has it been used, and for how long? — just so that you can go conduct investigations to try to constrain potential damage.
G
One interesting new feature that Docker has in beta right now is that we've extended Docker Content Trust to the edge, into the engine, so the engine now does signature verification on images and will only run them with the correct trust pinning. That's in public beta right now, and we'll be announcing it soon.
H
Hey Adam, I'd be interested in understanding how the Docker caching works. Is that a configuration in Docker Engine, or do you point to the cache?
G
That's a good question — I'm definitely not an expert in that. I think you point to the cache, but I'll have to double-check for you. Okay.
C
Okay, so I think the discussions about the KubeCon presentations we should leave for another, dedicated meeting for the people who will be there, because these general discussions are much more valuable to the whole group. So maybe — Cindy, Steve, Preston — we can do it sometime next week or something like that. I know there's one thing...
C
For example, T. John wanted to join us, so we should see how to organize more around that, but we don't have time for it on this call. So maybe we should do it in a dedicated call and start preparing for the sessions in Seattle.
B
Yep, I think that sounds good — and maybe we start with just a Slack DM to get it started.
C
And the last thing: I wanted to see how we better document all these resources and materials that we find throughout the conversations and the meetings, just so they don't get lost in those Slack threads and things like that. So maybe a GitHub page, or something like that?
B
Something better, honestly — that kind of curating of something as vast as what we're talking about with Kubernetes, I just feel like it becomes untenable from a curatorial point of view at some point. I mean, honestly, the best of those kinds of things are when people do the sort of "-awesome" lists on GitHub.
B
Yeah, I'm hoping that, as the run-up to the KubeCon talk, we can potentially come up with some light stack-ranking of the problems — a sort of rubric that results in us ranking something as both impactful and solvable to some degree — as a way of actually doing a little bit of work towards...
B
You know, kind of an illustrated reference of how to solve that type of problem, right? Just to pair "here are the interesting challenges" with "here's at least a way to think about solving that kind of problem with Kubernetes" — whether that's an example of an operator calling into a legacy thing, or something around converting events to service discovery, or the networking QoS — any of these things that we have now inventoried.
C
Yeah, I think maybe Harald's presentation next time will be valuable, because it uses things like edge labels just to move workloads between the nodes. So that's — not to say low-hanging fruit, but definitely a valid use case, and something to think about in this context.
B
And Harald, if those are available to share in advance — if you can jam them into the notes — I'd be happy to click through the slides, if they're posted.