From YouTube: Kubernetes WG IoT Edge 20181207
Description
December 7, 2018 meeting of the Kubernetes IoT Edge Working Group - presentation on Rafay Edge integration of Kubernetes clusters deployed at the edge
E
Yeah, okay. Hi guys, I'm Sarah, a principal engineer in the platform engineering group at Rafay.
E
...so that it suits our needs at the edge.
E
Great, everything is good. Yes, so at a very high level, the platform consists of the core and the edges.
E
Talking about the core: the core is the brain of the system. It coordinates operations across all the edges, making them act as one cohesive unit.
E
It provides a multi-tenant view of all platform primitives, like container registry, workloads, ingress, key management, etc. All the edges continuously report their events, logs, and metrics back to the core, where they are aggregated and appropriate actions are taken.
E
It is also responsible for dynamic orchestration of workloads across the Rafay edges, and the lifecycle of an edge is also managed by the core. Moving on to the Rafay edges: they are, at heart, Kubernetes clusters. They are independent and self-contained, so no two edges share state between them.
E
They have customizations like Kubernetes CRDs, replicated storage, key management, and debug tools stacked on top to make them integrate better into the Rafay edge platform.
F
And do you want to pause and ask for questions as we go along? Yeah, yeah.
F
It's a small enough group; if anyone has questions, go ahead and feel free to ask as we go. I'll pause at the end of each slide.
F
Okay, thanks. Okay, otherwise we'll move along. This one's busy.
E
Yeah, so we'll look at the edge next.
E
We have three or more nodes. The first three nodes are hyperconverged. What we mean by hyperconverged is that we have etcd, the kube masters, kube controllers, and kube API servers all running on the same node, and we run three copies of them for availability.
E
We have the cluster that powers our replicated storage also running on these three nodes. All the Rafay control services run in a separate namespace on these three nodes. So these three nodes can also accommodate workloads.
E
When we want to scale an edge, we can keep adding worker nodes, leaving the hyperconverged nodes as is. We don't directly expose a Kubernetes view to our users, so multi-tenancy is achieved using namespaces, resource quotas, and network isolation.
C
But you're not isolating tenant workloads per node in any particular way?
E
They are... so the networking is isolated. No two customer namespaces can talk between themselves: they can talk within a namespace, but they cannot talk across namespaces.
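For illustration, this kind of same-namespace-only isolation maps onto a standard Kubernetes NetworkPolicy. A minimal sketch in Go using the upstream API types; this is illustrative and assumes a CNI that enforces NetworkPolicy, not Rafay's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sameNamespaceOnly builds a NetworkPolicy that selects every pod in the
// tenant namespace and allows ingress only from pods in that same
// namespace, cutting off cross-namespace traffic.
func sameNamespaceOnly(ns string) *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-same-namespace-only", Namespace: ns},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // empty selector: all pods in the namespace
			Ingress: []netv1.NetworkPolicyIngressRule{{
				From: []netv1.NetworkPolicyPeer{{
					// An empty PodSelector with no NamespaceSelector means
					// "any pod in this policy's own namespace".
					PodSelector: &metav1.LabelSelector{},
				}},
			}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(sameNamespaceOnly("tenant-1"), "", "  ")
	fmt.Println(string(out))
}
```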
C
Right, but you're not setting any expectation of physical isolation?
E
No, we're not. We have additional policies where we can make all the containers in a workload run on the same node for performance reasons, but as is, we don't do anything; they are free to be scheduled anywhere.
F
And for those of you with really good eyesight, looking at node one where it says tenant two, workload two: that's really tenant one, workload two, within tenant one's namespace. So there are multiple workloads of tenant one in node one; there's a typo in this diagram. Good question, so thank you very much. Anything else on this, or should I go forward?
C
The crypto edge: can you say something a little bit more about that? Is this responsible for registering the edge with the core, or...?
E
I have a slide about that, but okay: it enables our key management solution in the edge. We have a fully multi-tenant key management solution in the edge. All the customer private data, like keys and secrets, come encrypted to the edge using the organization KEK, and they are decrypted and put in memory when they are needed.
C
So are you using Consul, are you using HashiCorp Vault for that, or do you have your own?
F
One distinction from kubefed, as I understand it (this is Chad), is that kubefed broadly places workloads into all locations, and so we're trying to provide a little additional control over that. The multi-tenant aspect of the file system view is another aspect, and we're trying to abstract away some of the challenges of configuration management, so providing a higher-level interface to the edge.
E
No, no, we run off-the-shelf software, but they have our customizations stacked on top of them to be able to talk to the core.
F
Got it, yes. And by hyperconverged, what I think we mean is that the cluster control (the Kubernetes services, etcd cluster and so on) and customer workloads run on the same nodes. The other way to run the cluster is to have dedicated control nodes and workers. If you have hundreds of workers, the overhead of dedicated control is fine, but this allows us to converge control and customer namespaces onto fewer machines in an edge location.
B
Okay, I got it. It's just, the reason I asked is that there are some industry groups and analysts that use this definition of hyperconverged, meaning people who have products that often bundle not just hardware but a virtualization solution, kind of a whole stack from top to bottom, as a product. So I was a little confused.
F
Thanks, Steven; I didn't realize that. So maybe we can adjust the... is there a different phrasing you would recommend?
F
Okay, great feedback. Thank you.
H
Yeah, this is Robbie from Rafay. May I jump in to explain the terminology? I think what we meant by hyperconverged in this sense was that the roles were hyperconverged for these nodes: they're running the master control software for Kubernetes, plus they're running the storage backend, and they're running the workers as well. So the roles of the Kubernetes nodes, in our mind, were hyperconverged for these.
E
Yeah, so most of the customizations done to the Kubernetes cluster on the edge are done as CRDs. Rafay has its own platform primitives that are exposed to the end user, and they may be decomposed into one or more Kubernetes objects.
E
Having CRDs allows easy conversion between Rafay primitives and Kubernetes objects. CRDs also allow schema evolution for Rafay primitives, and it decouples the underlying implementation.
E
So we'll go into each, yeah.
C
So for most of these: were these things that already existed that you then later wrapped in CRDs, or were they kind of built with CRDs in mind?
E
No, I mean, no, we didn't have a pre-existing platform, but we didn't want to expose Kubernetes to the end users directly. We wanted to expose an edge computing platform, not a federated Kubernetes cluster, because then we would be really tied to the actual Kubernetes versions and implementations, right? Okay, yeah.
E
These are just sort of abstractions on top of Kubernetes. So the main CRD for us is the workload CRD. Workload is a very loaded term at Rafay: it is the fundamental unit of work. It's like a mix of deployment, services, ingress, isolation, resource quotas, everything, and placement.
E
So the unit we expose to the user is the workload, and the workload is decomposed into Kubernetes objects at the edge.
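To make the shape of such a CRD concrete, here is a rough sketch of a workload type in the usual Go apimachinery style. All field names here are invented for illustration; Rafay's actual schema is not shown in this talk:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Workload is a hypothetical CRD bundling everything the platform exposes
// per unit of work: containers, ingress, quotas, and placement. A controller
// at each edge would decompose it into Deployments, Services, Ingresses,
// ResourceQuotas, and NetworkPolicies.
type Workload struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              WorkloadSpec   `json:"spec"`
	Status            WorkloadStatus `json:"status,omitempty"`
}

type WorkloadSpec struct {
	Containers []corev1.Container  `json:"containers"`          // pod template, roughly
	Replicas   int32               `json:"replicas"`
	Ingress    []IngressRule       `json:"ingress,omitempty"`   // SNI hosts and ports
	Quota      corev1.ResourceList `json:"quota,omitempty"`     // cpu/memory/storage caps
	Placement  []string            `json:"placement,omitempty"` // which edges to run on
}

type IngressRule struct {
	Host string `json:"host"` // SNI name routed to this workload
	Port int32  `json:"port"`
}

// WorkloadStatus is what gets aggregated back to the core, per edge.
type WorkloadStatus struct {
	EdgeConditions map[string]string `json:"edgeConditions,omitempty"` // edge name -> ready/failed
}
```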
E
The workload controller is responsible for the lifecycle of the workload. Every time we publish or upgrade a workload in the core, the metadata is captured as the workload spec and synced to all the designated edges, and the status of the workload is aggregated back in the core from the designated edges.
E
Somewhat similar; we had our own requirements, I mean. First of all, it was not available at the time we started, and we had our own unique requirements that made it make sense for us to write our own CRDs, because at the end of the day, the only thing constant is whatever Kubernetes is exposing.
G
Yeah, yeah.
E
Next one, yeah. So another CRD we have written is the workload ingress. None of the existing ingress solutions supported our ingress needs, so we built a custom ingress solution combining L4 and L7 proxies. The workload ingress CRD represents the Rafay view of an ingress in the edge. We currently support TLS with SNI, and SNI is used for load balancing across multiple workloads in an edge. So SNI is how we achieve multi-tenancy at an edge.
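For a concrete picture of SNI-based multi-tenant routing: Go's standard library exposes the client's SNI name during the TLS handshake, so one listener can serve many tenant hostnames on one IP and port. A minimal sketch (hostnames and certificate paths are made up; this is not Rafay's proxy):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// One certificate per tenant hostname; paths are invented for the sketch.
	certs := map[string]tls.Certificate{}
	for host, pair := range map[string][2]string{
		"app.tenant1.example.com": {"tenant1.crt", "tenant1.key"},
		"app.tenant2.example.com": {"tenant2.crt", "tenant2.key"},
	} {
		cert, err := tls.LoadX509KeyPair(pair[0], pair[1])
		if err != nil {
			log.Fatal(err)
		}
		certs[host] = cert
	}

	cfg := &tls.Config{
		// GetCertificate runs during the handshake with the client's SNI
		// name, which is what lets a single listener multiplex tenants.
		GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
			if cert, ok := certs[hello.ServerName]; ok {
				return &cert, nil
			}
			return nil, fmt.Errorf("unknown SNI host %q", hello.ServerName)
		},
	}

	srv := &http.Server{Addr: ":443", TLSConfig: cfg, Handler: http.NotFoundHandler()}
	// Empty cert/key arguments: certificates come from GetCertificate above.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```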
H
I mean, so, until...?
E
So, first of all, we are not running on public clouds; that's not interesting for us. We run mostly on bare-metal servers. The main competing ingress solutions at the time we started were Istio and the NGINX ingress. Istio still didn't have multi-SNI support back then; they recently added it. And the NGINX ingress supported multi-SNI, but it had its own...
E
Also, we were looking at other layer-7 features, like routing and TLS termination and API key validation, which you have to customize in NGINX anyway, to provide API key validation and rate limiting, etc. on a per-session basis.
E
Okay, yeah. So another CRD we have written is the dynamic config. Kubernetes has a 1MB limit for ConfigMaps and Secrets.
B
So by addressing the issue, you mean you think you need to go over one meg?
E
Yes, yeah. So using the dynamic config, we can potentially send gigabytes of config data to the edge; we only send the metadata to the edge as a dynamic config CRD spec.
E
Somebody can run a Varnish cache at the edge, for example, and they have a pre-built cache that they want to bring to it.
F
In this case, the term config, like, you know, a text config file, may be too restrictive, because we're using this to include other data that applications may need. Configuration files are one example, and the motivating example for us was getting dynamic configuration uploaded into apps without app restarts.
E
So another minor problem we have noticed is with the content propagation. When you mount a ConfigMap or a Secret to a volume and change the ConfigMap or Secret at runtime, the propagation takes up to 30 seconds, which is fine, but look at what happens after the propagation: the old symlink is deleted and a new symlink is created. So if you're using fsnotify or something to watch for file events, it's not going to fire the way you expect.
C
I'm curious: are there others...? I mean, this is sort of CDN-like in some ways, but it's not really a CDN, because it might be in the service of an application that's running on the edge, like you say, cached content, etc. Have you come across other folks that have looked at how they do this?
F
If you have an example of other ways that Kubernetes has done this, that would be great; we're looking for use cases and customer applications. We think that placing state in this way can help with some of them. We haven't...
C
Yeah, well, another example of something that would need this kind of thing is if you wanted to do staged firmware updates. So if you had a bunch of devices on the edge and you wanted to push a firmware update that is cached and then redistributed from that edge to each device, you'd want something potentially kind of like this, right?
E
Yeah, it is read-only; like ConfigMaps or Secrets mounted as volumes, it is read-only. It mirrors the same functionality, but it just eliminates the 1MB limit.
E
So another CRD we have written is traffic shaping. We wrote this to enable resource quotas for network bandwidth. Combined with Kubernetes network isolation, this enables us to support network-level multi-tenancy in an edge. We can shape north-south and east-west traffic in an edge. This prevents any one single workload from monopolizing network bandwidth.
E
The controller is also responsible for periodically reporting bandwidth statistics to the core.
F
Yeah... oh sorry, Steven, I think you're on mute.
B
Does your use of network-attached storage also enter into this traffic shaping? Does it have its own bandwidth carved out?
E
Right now we are not considering the bandwidth for storage, but it is on our roadmap to include that.
C
And so I have a couple of questions here, maybe really quick. So you're able to provide a single throttling point across workloads across the cluster by flowing all traffic through the controller on, say, the northbound side?
E
So for the traffic entering the cluster, there are only specific entry points through which it can enter; it has to enter through our L7 proxy. So that's where we control the north-south.
C
Got it; but for, like, egress, right? If you have workloads running on the edge that are reaching out to, say, public internet services, and you want to limit how much egress there is from the location, because it's limited, like in the typical sort of asymmetric setting... I've been looking at the per-node details and what Kubernetes provides as far as traffic shaping in the CNI.
E
So it's not really... the cluster-wide limit is a side effect; we still set these limits, but at a pod level.
E
I mean, we have looked at the bandwidth plugin, but this is in-house. This is based on how the bandwidth plugin works internally; I mean, it's tc, it's built on tc. Yeah, it's based on tc, but we have written it ourselves; it is definitely inspired by the bandwidth plugin.
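For flavor, shaping of this kind ultimately comes down to attaching a token-bucket qdisc on the pod's host-side veth, which is roughly what the CNI bandwidth plugin programs. A hedged sketch that shells out to tc (the interface name and rates are invented; a real implementation would likely use netlink directly):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// shapeEgress caps egress on a pod's host-side veth with a token bucket
// filter, in the spirit of what the CNI bandwidth plugin configures.
func shapeEgress(veth string, rateMbit, burstKbit int) error {
	args := []string{
		"qdisc", "add", "dev", veth, "root", "tbf",
		"rate", fmt.Sprintf("%dmbit", rateMbit),
		"burst", fmt.Sprintf("%dkbit", burstKbit),
		"latency", "400ms",
	}
	out, err := exec.Command("tc", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("tc failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical veth for a tenant pod, capped at 10 Mbit/s.
	if err := shapeEgress("veth1234abcd", 10, 256); err != nil {
		log.Fatal(err)
	}
}
```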
E
So one of the other customizations we have done is key management. Confidential customer data, like private keys, are sent encrypted to the edge with organization-specific encryption keys. They are decrypted and stored in memory at the edge for other resources to access. So this is fully multi-tenant at the edge.
E
I have a quick question. Since I joined late, maybe you talked about this already; I'm just curious what kind of customer scenario led you to support multi-tenancy at the edge, and how it's different from the cloud, because currently for Kubernetes there's a multi-tenancy SIG talking about supporting multi-tenancy. So from your implementation perspective, what's special that you're doing differently from them?
E
So Kubernetes multi-tenancy revolves around namespaces and service accounts, essentially, yes.
E
So we needed some additional things to be done to achieve our multi-tenancy. We need resource quotas: CPU, memory, and storage resource quotas are already available in Kubernetes, but bandwidth is not available, so we added bandwidth.
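The stock part of that list looks like a per-tenant-namespace ResourceQuota for CPU, memory, and storage; bandwidth, as noted, is not a standard quota resource, which is what the traffic-shaping CRD adds. A sketch using the upstream types:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tenantQuota caps the standard resources Kubernetes already meters;
// network bandwidth needs the separate shaping mechanism described above.
// The quota values are arbitrary examples.
func tenantQuota(ns string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "tenant-quota", Namespace: ns},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:     resource.MustParse("4"),
				corev1.ResourceRequestsMemory:  resource.MustParse("8Gi"),
				corev1.ResourceRequestsStorage: resource.MustParse("100Gi"),
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(tenantQuota("tenant-1"), "", "  ")
	fmt.Println(string(out))
}
```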
E
And network isolation: if you choose one of the... if you choose Flannel as your CNI, you can enable network isolation between namespaces, and that is also really important for multi-tenancy at an edge. And you need storage, I mean...
E
You need multi-tenant storage, and you need to set limits on the storage and IOPS. You can set limits on the storage in Kubernetes by default, but you can't set limits on the IOPS, so we are kind of working on that.
E
So we can't send the key in plain text; I mean, we can't just use this system to encrypt a customer private key and send it to the edge.
E
So we use organization-specific KEKs in the core and send the encrypted data to the edge. The controller looks at the workload spec and determines what scope the workload is in, I mean which organization the workload belongs to, and then it goes and talks to our core to get the organization-specific KEK, and then the data is decrypted and stored in memory in this controller.
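The envelope pattern described here, in miniature: the edge holds only ciphertext, fetches the organization's KEK from the core when needed, and unwraps it in memory. A sketch with AES-GCM; key delivery, caching, and zeroization are omitted, and all names are hypothetical:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

// unwrap decrypts a tenant secret that arrived at the edge as
// nonce||ciphertext, using the org-specific KEK fetched from the core.
// The plaintext lives only in memory; nothing is written to disk.
func unwrap(kek, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(kek) // KEK must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, fmt.Errorf("sealed blob too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil) // authenticated decryption
}

func main() {
	kek := make([]byte, 32)    // stand-in for the KEK delivered by the core
	_, err := unwrap(kek, nil) // fails here: no real ciphertext in the demo
	fmt.Println("demo only:", err)
}
```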
E
Similarly, we have API keys that can also be encrypted in the same way, if the customer wants to send any sensitive data.
H
We also introduced key hierarchies, where organization root-level keys are never distributed or used on the edge; the root of trust remains in the core, and the specific use-case keys, like encryption keys, signing keys, and integrity-check keys...
H
Those are essentially transported over on demand to the required edge, for that particular tenant, for that request, for its purpose; they get utilized in memory and then retired. So there's sophistication in the use case and also in the hierarchy that we enable with our key management agents.
E
No, I mean, we don't expose a Kubernetes view to our end users, so service accounts never come into play. Service accounts are only controlled by us. The customer containers run at a very low privilege, so they don't have any access in the cluster except, you know, to talk to each other and send and receive traffic from outside.
E
Another customization we have done in Kubernetes at the edge is a debug solution. It is very important for users to be able to debug their containers at runtime, and the problem is particularly compounded because of the geographical distribution of the edges. So we have written wrappers around the existing Kubernetes logs and exec APIs.
E
Logs and exec APIs that are tied to our notion of workload, to provide a multi-tenant view for debugging containers.
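Under the hood, a wrapper like this can fan out the standard pod-logs call per edge cluster and merge the streams. A single-cluster sketch with client-go (the namespace and pod names are invented; the multi-edge fan-out and auth are elided):

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// One kubeconfig per edge; a real wrapper would iterate over all edges
	// in the workload's scope and interleave the resulting streams.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Tail the logs of one container of the workload on this edge.
	req := cs.CoreV1().Pods("tenant-1").GetLogs("my-workload-pod",
		&corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}
```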
E
So any of our customers can just go to our core and debug their application. Even though the workload is geographically distributed, they can still see a single view of the application and debug it like any other cloud application.
E
No, we have other pipelines to fetch the logs from the customer containers, but the debug is initiated by the user. So you enter a debug mode, and then we talk to all the edges in scope for that workload to initiate the debug, and then you can just pick whichever location you want to debug and drop into a shell, or view logs, or do whatever you want.
F
Yep. And maybe this would have been a good place to have started, but we wanted to kind of jump into the technical content. As we talked about just a moment ago, we're not a Kubernetes company in that sense; you know, we're not offering Kubernetes as a service.
F
What we're solving is application distribution of containerized microservices, right? The Docker and containerized microservice format has, you know, clearly dominated, and we think that's a great way to ship containers without carrying along the weight of virtual machines and so on. And Kubernetes is a beautiful platform for orchestrating those microservices. It does a lot of things that we really like and provides us, you know, shoulders of giants to stand on.
F
In that sense, any time we find a limitation, we look for solutions within the technology that's provided out there, and when we don't find it (maybe it doesn't exist, or maybe we just didn't find it), we build what we need to customize the platform and meet...
F
What we perceive are the user needs. And we want to be plugged into the community; that's a big reason for this presentation to this organization, and we really appreciate the feedback we get, so that we can learn, because we certainly don't know everything. But we've kind of built a lot; we've jumped in; the team here is completely fearless and built something that's up and running. We've got some...
F
You know, people kicking the tires, trying it out. If there are things that we've built that seem unique or useful to you, you know, we'd love to talk.
B
Have you open sourced the things you're building?
F
We have not open sourced anything yet; we're a small team. Members of our engineering team have in the past worked in open source communities, and so we understand a few things about that, and the cost, and I think at the appropriate time we will, because that benefits...
C
I guess it'd be interesting to hear, to know, whether you have thought about this kind of balance between where things are potentially generically useful versus the ease you have found in using CRDs to simply build the custom thing that you need to fit your needs. In other words, I think...
C
Sometimes we are always looking for perfect reusability or perfect generalization, when the truth is there are enough specifics in some use cases that the lighter-weight path is just to write something custom, and that's what's kind of beautiful about CRDs: they do allow you to do that without necessarily everything having to be generic. But do you find that...? I mean, this is a very kind of loose question, but where have you found that balance between trying to be reusable versus just addressing the need at hand?
F
So if we look back, there's maybe some of the networking... the networking piece, maybe, but I'd love to get...
F
Yeah, and to others running, you know, multi-workload or multi-tenant inside Kubernetes, this could be useful. It'll take a little investigation to understand how to plug it into the community, and Preston, if you have advice on how to best approach the Kubernetes community (or anyone on the call): what's the best path in, if we have something that we think is a contribution?
C
If you have further control at the operator level of the cluster, that'd be interesting. But where you can kind of converge with what's already there (there are already some annotations being developed around CNI and pods for ingress and egress), this would be interesting, but...
C
Yeah, I think that one's also really interesting, and I haven't really done a ton of hunting around for what else is out there. But, like I said, I know that when we're looking at things like staging firmware updates, or any sort of staging of content, like lookup databases or anything like that, it has some similarity to what CDNs must do as far as distributing content for staging locally. But in this case it's not just for serving; it's for use by microservices running on the edge.
B
Great; if that's it, thank you. I think we should note that next week is KubeCon North America, and a number of us are going to be there. So if some people on this call weren't aware of that, I'd encourage you to go to the edge IoT sessions, and maybe we'll try to get an informal face-to-face meeting going at some point.
F
Yeah, I don't think anyone from around the table here is going to be there, unfortunately, not this time, but appreciate it. All right, so again, if there are any further questions, or anybody wants to kick the tires, you can reach out to us at rafay.co, and I'll be happy to continue the discussion.
F
Thank you for your time. I don't know if there are any other agenda items for today, but we're happy to turn it over.
A
So I don't have any other agenda discussions; maybe we can just continue working on organizing the activities for next week. So maybe we can just, you know, leave it with the people who are interested in that topic.
J
I have a quick thing that I don't think needs to be part of a large group yet, if I can have 30 seconds. (Yeah, go ahead, shoot.) How tapped in is this group to the Linux Foundation's plan, being privately discussed (private, but not NDA private), to organize all of the edge projects under an umbrella?
J
Yeah, I think it's... so Arpit is driving it internally at LF, and as the rep of the Open Glossary of Edge Computing, I'm having a lot of those discussions, because that sort of spans all of the edge projects. But it's EdgeX Foundry and... CNCF, at least part of it; I don't know what they're going to do with it, it's not all edge-related and so on. And it's going to be a pretty big deal.
J
I think this working group could play a pretty big role in how Kubernetes is messaged into that. I can ask Arpit if he would want to join one of these calls and give a 10-minute presentation. This is going to be probably late Q1, so it's nothing we need to rush on, but what's the group's appetite for engaging with that proactively? I'm happy to liaise, just as a matter of being involved in all these things tangentially anyway.
J
But if this group wants to be more actively involved as a group, what's the appetite?
J
So it's two things. It is primarily an umbrella of other Linux Foundation projects, but they're also introducing (and I don't know if they've figured out how exactly it's going to work) what they're calling an associate membership, and they're offering it to other sorts of non-profit open source groups to join. So we've already talked to the TIA, the telecom infrastructure association, Infrastructure Masons, the OpenFog Consortium... you know, a bunch of those folks that I end up talking to, because the open glossary is kind of the... I spent...
You
know
a
bunch
of
those
folks
that
I
end
up
talking
to,
because
the
open
glossary
is
kind
of
the
I
spent.
J
It's astounding, and I've even heard rumors that they're looking at doing hardware too, so Jim's ambitions are, yeah, impressive. Anyway, like I said, there's no rush; it's going to be probably late second quarter. Again, I'm happy to bring the news back and forth, but if this group wants to participate, or individuals in this group want to participate more actively, just let me know, and I can try to connect those dots.
J
Yeah, I think the primary opportunity is when the Linux Foundation goes to message around edge. I think this working group, whatever it becomes (whether it stays a working group or becomes a project; I'm not sure what the plans are), should be positioned very strongly and should influence the messaging. That would be, I think, the primary point of intersection.
B
It really is rather difficult to keep track of all this, and I've been to conferences where you have people who really are focused on edge IoT who come to KubeCon, and maybe at least half that conference really is of very little interest to them. So having something that approaches it from a use-case perspective might be...
B
Thanks. I think some of us are still working on our presentation decks for KubeCon next week, so the rest of you are welcome to listen in if you want, but I think we're going to use the next five minutes, maybe, to talk about that.
D
Yeah, I have to boogie as well. Thanks, everybody, and have a nice weekend.
B
Take care. Anyway, on the deep-dive deck, just before I go: I did add a couple of slides. One is just the speakers; I cut and pasted people's icons right out of the KubeCon Sched site, but obviously edit anything you don't like. Then I added one slide that I would view as kind of an intro overview.
B
I'm unclear... I think that same presenter slide, by the way, could be used in both decks we open, the intro and the deep dive, since it's the same speakers. And way down, I think it was slide 31, maybe 32, I added a slide that I intended to be a close, where we're going to leave the audience with the links to get involved in the group. So I think it's good, when you finally close and go to Q&A, to just leave that up, because it's convenient for people to snap a picture of.
A
We can split this into two presentations. So if the flow doesn't really fit with everything, maybe we just leave this presentation as we originally planned, with the workloads and a couple of deep dives into those workloads, and maybe Cindy can use like 10 minutes of the intro session to introduce KubeEdge, and then we continue with the panel.