From YouTube: Kubernetes WG IoT Edge 20210421
Description
April 21 meeting of the Kubernetes IoT Edge Working Group. Agenda: Introduction to OpenYurt, a CNCF sandbox project which extends Kubernetes for cluster deployments at the edge.
A
Okay, let's get this formally started. This is the April 21 meeting of the Kubernetes IoT Edge Working Group. This group, like all other groups in the Kubernetes project, abides by the Kubernetes Code of Conduct; the summary of that is just: be nice to each other. That also means that all meetings are public and recorded, and will be posted on YouTube after some latency for downloading and checking.
A
So with that said, the recording, I think, is already started. Yes, so let's get going on the agenda. Today we have Fei (correct me if I get your name wrong, or I'll let you say your name), who's a lead on the OpenYurt project, and he's going to give us an introduction. Fei, just so you know, this group has been following some of the other Kubernetes-for-edge solutions for some time, principally KubeEdge.
B
Yeah, I understand. I didn't put a direct comparison in the presentation slides, but this may take 20 to 25 minutes, and in the Q&A session we can discuss the differences between OpenYurt and KubeEdge. I'm guessing that during the presentation you'll probably figure out the differences anyway. So, anything else, or should I start?
B
Yeah, good morning everyone. Today I'm going to give you a brief description of the OpenYurt project, which I was working on at Alibaba. My name is Fei; I come from Alibaba Group. Okay, let's get started. This is the agenda of today's talk: I'll give you some background about this project and about edge computing in general, and I will discuss some challenges we faced when we tried to support edge computing in Kubernetes. I will give you some high-level design of OpenYurt and how it resolves those challenges, and I'll give a short description of the use cases this solution was built for. In the rest of the time we can have Q&A, and I'll answer any questions you guys have.

Okay, so: edge computing in general. I guess you guys are quite familiar with this diagram. Basically, we are trying to resolve use cases where the workloads run at the edge, and the workloads typically run distributed algorithms. The characteristic of these workloads is that the local data generated at the edge is the input; usually the edge application will do some analysis, data mining, or machine learning training work on the edge node, and the results will be sent back to the center, which is in the cloud. Typically those workloads are managed from a central place, which is the cloud. The driving force behind this architecture comes from a few reasons: low latency, limited bandwidth between the edge and the cloud, and some kind of autonomy, because at the edge you have various network connection or maintenance problems, and nodes may sometimes get disconnected from the cloud. We need to maintain a kind of autonomy without the central control plane, so that the edge can still handle some kinds of errors, like node reboots, things like that. Another thing that is very important is privacy and security. Many people don't want to send all the data generated at the edge to the center, because they are concerned about leaking information from that data. That's the reason I think edge computing is getting more and more promising for solving real use cases.

So, in general, cloud edge computing is a platform that unites the cloud, the edge, and IoT devices. In this architecture, the cloud typically is a centralized control plane, and the edge in general is composed of compute nodes that are closer to the devices or the data. The devices are actually the entry point to the remote environment; they are essentially the data generators, and their output is sent as input to the applications running on the edge nodes. Yeah, this is typically the architecture we are trying to tackle in OpenYurt.
B
Let me just give a brief description of Kubernetes; I think everybody is quite familiar with it. In general, it is a container orchestration platform which is the de facto standard; people think of it as a cloud operating system. It has been used in many, many use cases, and almost all major cloud vendors have support for managed Kubernetes services. I think the nice thing about Kubernetes is that it has a unified user experience that abstracts away all the underlying infrastructure-layer details, so people can use the same interfaces, APIs, and YAMLs and apply their workloads in almost any environment. Another good thing is that Kubernetes has a very, very strong ecosystem, because it has excellent extensibility by leveraging CRD capabilities. I'll just give a very quick overview.

I think Kubernetes in general, architecture-wise, kind of fits the edge use cases and architecture, because it is essentially a distributed system. So if you look at this mapping, you can just put the Kubernetes masters (the API server, scheduler, and controller manager) as the control-plane part and place it in the cloud. You have all sorts of edge-side devices, like cameras and sensors, in cars, hospitals, and factories, and those devices connect to the nodes at the edge. Where you see dashed lines here, a dashed line typically means that the nodes in one cluster may be in different regions. The regions are things like a factory or a building, so the nodes have a kind of local affinity and are bound together. This is pretty common in this Kubernetes edge architecture, and we hope to have solutions that can optimize this scenario, such that the nodes themselves get this kind of local-affinity feature. So next I'm going to describe some challenges you face if you want to use Kubernetes to manage the edge use cases.
B
What are the challenges we face? There are at least three challenges which are addressed in OpenYurt; I will also list some other challenges that have not been addressed yet and which are our future work. The first problem is the unreliable network. This is pretty much always true: when we talk to the edge customers that use our solution, they sometimes ask about the fact that the network connection between the centralized control plane and the edge may not be always on. So that's one constraint, but the biggest problem is that this is not a valid assumption in Kubernetes. Kubernetes assumes that the kubelet always has a connection to the API server. Otherwise, if we don't do anything, if you just cut off the connection between the nodes and the API server and a node restarts, none of the pods will be recreated by Kubernetes. That's a limitation of the Kubernetes architecture.

Another problem with an unreliable network is that the Kubernetes API server relies on heartbeats to detect that a node is offline, and, as you guys know, there is a controller which will evict pods if the node has been detected as offline. That is a good feature to support high availability for applications, but in edge use cases this is a problem.
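To make that eviction problem concrete, here is a minimal, hypothetical sketch (illustrative only, not Kubernetes or OpenYurt code; the names and the grace period are invented) of the heartbeat logic being described: once a node's last heartbeat is older than a grace period, the control plane treats the node as gone and its pods become eviction candidates, even if the workloads are still running fine at the edge.

```python
# Hypothetical illustration of heartbeat-based offline detection.
# A disconnected edge node stops posting heartbeats, so the control
# plane marks it NotReady and schedules its pods for eviction, even
# though the pods may still be running on the node itself.

GRACE_PERIOD = 40.0        # seconds without a heartbeat before NotReady
last_heartbeat = {}        # node name -> timestamp of last heartbeat

def record_heartbeat(node, now):
    last_heartbeat[node] = now

def nodes_to_evict_pods_from(now):
    """Return nodes whose heartbeat is older than the grace period."""
    return sorted(
        node for node, ts in last_heartbeat.items()
        if now - ts > GRACE_PERIOD
    )

record_heartbeat("edge-node-1", now=100.0)
record_heartbeat("edge-node-2", now=130.0)

# At t=150, edge-node-1 has been silent for 50s (past the grace
# period), while edge-node-2 is only 20s behind.
print(nodes_to_evict_pods_from(now=150.0))  # ['edge-node-1']
```

The point of the sketch is that the decision is made purely from the control plane's view of connectivity, which is exactly what breaks down when the cloud-edge link, rather than the node, is what failed.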
B
So,
even
though
your
applicant
is
still
running
a
node,
but
if
you
cut
off
the
connection
between
the
node
and
the
eps
server,
the
api
server
via
starts
to
evict
the
parts
from
the
server
side.
Yeah,
that's
one
problem.
Another
problem
is
the
unidirectional
network,
so
usually
the
agile
resides
in
the
internet
so
which
you
can
access
the
public
eyepiece
through
the
s-net,
but
the
outward
component
typically
cannot
access
it
in
visual
nodes
within
the
internet.
B
So
so
this
is
the
and
the
problem
of
this,
this
unidirectional
network
is
that
some
kubernetes
api,
which
is
very
important
for
competitive
users
for
do
debugging
or
auditing,
such
as
the
login
excc
apis,
cannot
be
supported
in
in
this
networking
scenario,
so,
which
is
another
problem
that
we
are
trying
to
resolve,
the
the
last
one
is
the
poor.
B
While
the
pull
is
the
one
that
I
was
mentioning
in
the
previous
slide
that
some
some
nodes
that
can
form
a
pool
which
rely
on
the
same
factory,
same
building
kind
of
thing
or
same
hospital
in
the
sense
that
if
you
have
a
centralized
controller,
the
kubernetes
cluster
is
imagine
many
hospitals
we
so
we
are
trying
to
so
people
always
find
you
know
challenging
if
you
want
to
deploy
the
same
applications
across
different
pool
or
different
regions.
B
A
So I guess that last one is just getting at the point that in Kubernetes, in general, any pod expects to be able to reach any other pod in the same cluster, right? And deploying at the edge, or at multiple edge locations, potentially breaks that, and you're trying to address that. That's what you're saying, right?
B
That's one aspect. Another aspect: we see cases where people try to deploy, let's say, one cluster managing three hospitals, where the three hospitals are in three regions, with the same set of software but with different versions. They want an easy way to manage this type of application, basically spread across different regions with different versions or different parameters. Usually you would create three Deployments, one Deployment per hospital, but this solution may not scale well: if you have 10 hospitals, you need to create 10 Deployments. So we introduce (I will talk about it later) a workload controller called UnitedDeployment. Basically, you can deploy the same set of applications to different regions with slight differences, so you use one controller to manage the lifecycle of multiple copies of an application.
B
Yeah, so that's part of the pool-wide application management I was talking about; I will give more details later. There are another two challenges that I think are also important. The first one is the resource requirement. I don't think this is a big blocker now, because nowadays people usually have decent resources at the edge. You need to install all the default Kubernetes components anyway on the edge side, including the kubelet, kube-proxy, and the CRI runtime, and those node daemons are not optimized for resource usage, which means they consume CPU and memory. I believe there is a lot of room for improvement to reduce the resource consumption, but in the current state of the art, cutting resource consumption has never been a goal of the Kubernetes project, as far as I know. So there are some resource requirements on the edge side, which was a concern for some people, because they said: my nodes running all these apps only have, say, two cores and four gigs; I cannot afford a solution that takes too much overhead. So that's one thing that I think poses a challenge to leveraging Kubernetes to manage the edge use cases.

Another thing is device management, which I think is another strong use case for Kubernetes anyway. But to me this is also challenging, because there is no standalone model and abstraction for it; everyone, I would say, is highly opinionated in terms of the API design and the actual implementation of the controllers supporting it. In the upcoming slides we will talk about our way of resolving device management, but I do think this is still challenging, because there are too many types of devices and too many differences in the requirements for running them. I would say that managing devices in Kubernetes is quite challenging. So these are the challenges that we see in terms of managing the edge use cases using Kubernetes.
B
Next I will talk about some of the design of OpenYurt. Our design principle kind of follows this way: we are trying to extend Kubernetes without intrusive modification, which means we are trying to keep all the Kubernetes components as untouched, as native, as possible, which also means we are trying to maintain full compatibility with upstream Kubernetes. This is kind of the key difference between our solution and KubeEdge. KubeEdge, at least in the earlier version I was looking at, changed the kubelet quite a lot. We don't do that; we do everything additively, with no intrusive code modification to existing components. OpenYurt is trying to support the edge characteristics, like autonomy; we're trying to resolve the cloud-edge communication problem; we're trying to provide the pool-wide management; and OpenYurt is also trying to support heterogeneous compute, like different CPU architectures, x86 and ARM, connected together. But the main thing we're trying to achieve is to keep OpenYurt Kubernetes-native. We generally just implement add-ons, and we keep OpenYurt in sync with upstream releases, so every quarter, I think, we will have a version upgrade of the base Kubernetes version. We are trying to achieve 100% API compatibility when using OpenYurt. Yeah, this is our design principle.
B
This is the high-level architecture of OpenYurt. On the cloud side, there is nothing but a bunch of controllers. It has its own node controller to replace the upstream node controller, just because we are trying to resolve that pod eviction problem when a node loses heartbeats; we have to slightly change that behavior, which is unfortunate. We have introduced the new UnitedDeployment to resolve the pool management problem, in the one respect of application management that I mentioned.

The cloud side also has a tunnel server, which is trying to resolve the cloud-to-edge-node communication problem, because they are probably placed in different networks, and to resolve that unidirectional access: the edge node can access the cloud, but the cloud cannot access the edge node. To resolve that problem, we introduce a tunnel service, so that the cloud API server can call the kubelet APIs through this tunnel into the node. On the edge side, we have a few new components introduced. The major one is YurtHub, which you can treat as a proxy that proxies all kinds of requests from the node components to the API server. So if the node is offline, that is, the node is disconnected from the API server, YurtHub will handle all the requests from the node components, so that the node components still work even if the node restarts. Yeah, so this is pretty much the high-level architecture of OpenYurt.
A
Your
diagram
that
flannel
is
specifically
called
out
as
the
cni.
Is
that
a
requirement
or
can
you
use
other
cni's.
B
Okay, so let me briefly talk about how we resolve edge autonomy. The goal of resolving this problem is that application availability is still preserved when the cloud-edge networking is off. The high-level idea is that we cache the cloud data locally. The second thing is that we try to retain the pod's IP and MAC address when the pod is recreated, but this requires some CNI support. That's the reason Flannel was mentioned: I believe we have some enhancements to Flannel to actually support this, but I don't think we have upstreamed all of this, because I don't know how many people want changes in Flannel for this. But I do see people keep asking: can we keep the same IP if the node restarts? We are trying to resolve that type of problem. The other part is caching, which is definitely in the open source: we will cache all the cloud information whenever it is updated; well, not exactly whenever it is updated, but whenever it is requested. So let me try to explain the workflow of this.
B
If you look at the diagram: we intercept all the network requests from any node component to a port, number 10261, which is where YurtHub serves. Every time a node component tries to access the API server, YurtHub will check whether the API server is reachable or not. If the API server is reachable, then the request will be sent directly to the API server through a load balancer, and the response will be cached locally by the cache manager. If at any time YurtHub finds that the API server is unreachable, it will serve the same API request by looking in the cache manager to satisfy the request. Basically, if you try to list the pods but the node is offline, YurtHub will reply with the pod list that it cached from the latest response. So that's essentially the way it works. I think this is a common way people try to resolve the problem of how, if the node is offline, we satisfy the API server requests from the node daemons running on the node. Yeah, this is the edge autonomy solution.
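As a rough illustration (a toy sketch, not OpenYurt's actual Go implementation; the class and function names here are invented), the YurtHub behavior just described boils down to a read-through cache with an offline fallback: responses are cached while the API server is reachable, and served from the cache when it is not.

```python
# Toy sketch of a YurtHub-style read-through cache with offline
# fallback. `fetch_remote` stands in for a real call to the API server;
# names and structure are illustrative, not OpenYurt's implementation.

class CachingProxy:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote   # callable: path -> response
        self.cache = {}                    # path -> last good response
        self.online = True

    def get(self, path):
        if self.online:
            try:
                resp = self.fetch_remote(path)
            except ConnectionError:
                self.online = False        # fall through to the cache
            else:
                self.cache[path] = resp    # remember the latest response
                return resp
        if path in self.cache:
            return self.cache[path]        # serve stale data while offline
        raise LookupError(f"offline and no cached response for {path}")

def api_server(path):
    if api_server.down:
        raise ConnectionError("cloud unreachable")
    return {"path": path, "pods": ["nginx-1", "nginx-2"]}

api_server.down = False
hub = CachingProxy(api_server)

print(hub.get("/api/v1/pods"))   # fetched live and cached
api_server.down = True
hub.online = True                # simulate the link dropping afterwards
print(hub.get("/api/v1/pods"))   # same answer, now served from the cache
```

Node daemons keep getting answers for anything they asked about before the disconnect, which is exactly the "serve the cached pod list" behavior described above; anything never requested while online simply cannot be served.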
A
If you can't reach the control plane at the time, yes.
B
Okay, thank you. Yeah, sure, that's the typical way. I think KubeEdge supports the same thing; they use a kind of SQL database for this purpose. We are not doing anything that fancy; we're still using a smaller, text-based file organization for storing the data on disk. So the second one is the cloud-edge communication.
B
The goal here is that, even if the edge nodes are placed in an intranet, we still need to provide a way for the API server in the cloud to access the kubelet, calling the logs and exec APIs at the edge. In general, people resolve this by introducing a tunnel service; a tunnel is a pretty common way to resolve this problem. The only difference is that we are not implementing an in-house tunnel service. We work with upstream: we talked with the upstream folks, saying that we are trying to use ANP (apiserver-network-proxy) for our edge solution, but we have one requirement, which is a routing policy to decide exactly which node a request is routed to, because native ANP can route a request to any ANP agent on any node, but we require it to go to exactly one of them.
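In other words, where stock proxying may treat agents as interchangeable, the edge case needs the request for node X to land on the agent running on node X. A hypothetical sketch of such a destination-based routing policy (illustrative only; real ANP negotiates agent connections over gRPC streams, and these names are invented):

```python
# Hypothetical sketch of destination-based routing for tunnel agents.
# An interchangeable-agent proxy could pick any connected agent; for
# edge we must pick the one agent on the request's destination node.

class TunnelServer:
    def __init__(self):
        self.agents = {}   # node name -> agent connection handle

    def register(self, node, conn):
        self.agents[node] = conn

    def route(self, dest_node):
        """Return the agent for dest_node, never an arbitrary one."""
        try:
            return self.agents[dest_node]
        except KeyError:
            raise LookupError(f"no tunnel agent registered for {dest_node}")

server = TunnelServer()
server.register("edge-node-1", "conn-A")
server.register("edge-node-2", "conn-B")

# A `kubectl logs` request for a pod on edge-node-2 must use conn-B,
# because only that agent can reach the kubelet on that node.
print(server.route("edge-node-2"))  # conn-B
```

This is the essence of the routing requirement: the kubelet behind the NAT is only reachable through the one tunnel its own agent opened.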
B
So we added this routing policy to ANP; I think this feature was merged and will be supported in an upcoming ANP release. Another thing is that ANP, I think, is only supported from Kubernetes 1.18 onward, and we are also trying to support this for previous versions. That's the reason we have an interceptor in the diagram, which does nothing but convert the old-version requests into the new form, I think gRPC or HTTP/2; I forget the details. It converts the requests from older versions of the kubelet into the format that ANP supports. So that's pretty much what we did to support the tunnel service: even if the nodes are placed in an intranet, you guys can still access them.
B
The next one is pool-wide application management. It's essentially done by two CRDs: one is the NodePool controller, and the other one is the UnitedDeployment controller. The NodePool controller is pretty simple: it just defines a set of nodes which form a pool. With this controller you can manage all the labels and annotations of the nodes that belong to a node pool, so you can easily add labels, annotations, and so on to all the nodes in one pool. The UnitedDeployment works with the node pools, so it will deploy similar kinds of Deployments into different node pools with slight differences.
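To make the fan-out idea concrete, here is a minimal, hypothetical sketch (function and field names invented for illustration; the real UnitedDeployment is a CRD plus a controller written in Go) of expanding one template plus per-pool overrides into one deployment spec per node pool:

```python
# Hypothetical sketch of UnitedDeployment-style fan-out: one shared
# template, one override per node pool, producing one spec per pool.

def fan_out(template, pools):
    """pools maps pool name -> dict of per-pool overrides."""
    specs = {}
    for pool, overrides in pools.items():
        spec = dict(template)                  # copy the shared template
        spec.update(overrides)                 # apply per-pool differences
        spec["nodeSelector"] = {"pool": pool}  # pin to the pool's nodes
        specs[pool] = spec
    return specs

template = {"image": "hospital-app:v1.0", "replicas": 2}
pools = {
    "hospital-a": {},                              # default version
    "hospital-b": {"image": "hospital-app:v1.1"},  # canaries a new one
    "hospital-c": {"replicas": 1},                 # smaller site
}

specs = fan_out(template, pools)
print(specs["hospital-b"]["image"])     # hospital-app:v1.1
print(specs["hospital-c"]["replicas"])  # 1
```

One object (the template plus its override list) then drives the lifecycle of all the per-pool copies, which is the scaling win over hand-writing ten nearly identical Deployments.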
B
Yeah, so for now we don't set that directly; it's all up to users for now. We just give some best practices, saying that you'd better put the nodes in the same location into one node pool, because, when we talk about the networking, we are trying to make the networking within the local pool connectable. So that's the reason.
B
Thanks, exactly. So if you look at these diagrams, you can say pool A is one hospital and pool B is another hospital; in different hospitals we give different Deployments, even for the same workload. And usually, even if you have the same Service, if the endpoints are spread across different pools, the network traffic is sent to the endpoints that belong to the same pool, so we are trying to avoid cross-pool network traffic.
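A toy sketch of that same-pool endpoint selection (illustrative only; in practice this is achieved via Kubernetes service topology or endpoint filtering in the proxy, as discussed later in the talk, and these names are invented):

```python
# Toy sketch of same-pool endpoint selection for a Service: traffic
# originating on a client node should only see Service endpoints whose
# backing node is in the same node pool.

node_pool = {                 # node name -> pool name
    "node-a1": "hospital-a",
    "node-a2": "hospital-a",
    "node-b1": "hospital-b",
}

endpoints = [                 # (endpoint IP, node it runs on)
    ("10.0.0.1", "node-a1"),
    ("10.0.0.2", "node-a2"),
    ("10.0.0.3", "node-b1"),
]

def same_pool_endpoints(client_node, endpoints, node_pool):
    pool = node_pool[client_node]
    return [ip for ip, node in endpoints if node_pool[node] == pool]

# A client on node-b1 only sees the hospital-b endpoint, so east-west
# traffic never crosses the pool boundary.
print(same_pool_endpoints("node-b1", endpoints, node_pool))  # ['10.0.0.3']
```

Filtering the endpoint list per consumer is what confines east-west traffic inside a pool, which matters because the pools may not even be routable to each other.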
B
This is kind of done, if you remember, with the topology-aware Service support; it works pretty well with that API change. Basically, for a Kubernetes Service, you can now specify the topology within which the endpoints should be, as a way to avoid cross-topology traffic. The pool can also be configured as that kind of region for the network traffic, so basically the east-west traffic boundary can be formed by the pool. Okay, so, the last one is device management.
B
We are also trying to support device management in OpenYurt, but we probably do it in a different way. The goal is that we try to manage different edge devices with various communication protocols, and we try to provide a way that users can manage devices like managing normal Kubernetes objects, through CRDs. We don't reinvent things; we kind of leverage the existing edge device management systems or platforms. In OpenYurt, we only define a set of APIs and CRDs, and those CRDs, we hope, can work with many specific edge device management platforms. You can see this as an interface layer, or glue layer, connecting different edge device platforms. The third CRD is the device profile, which represents different types of devices. The device profile is usually provided by the vendor, describing what the device is and which APIs, URLs, or resources of the device can be accessed through the vendor's drivers. And the device service is a CRD that defines the way a device connects to the cluster.
A
So
so
I
I
just
want
to
make
sure
I
got
this
straight:
is
open
yard
opinionated
on
defining
a
way
to
manage
devices,
or
is
it
open
for
plug-ins,
where
you
might
be
able
to
take
another
project
like
I
see
in
your
diagram
you're
using
the
stuff
from
ajax,
but
suppose
I
wanted
to
use
eclipse
hawk
bit
or
the
acry
project
are.
Are
those
something
you
could
use
with
this
or
that's?
That's.
B
That's
what
we
hope
so,
but
aquarium,
maybe
a
aquarium.
We
may
not
support
that,
because
that
is
slightly
different
than
the
traditional
edge
device
system,
but
for
edge
x
styles.
Basically,
you
have
a
platform
and
you
have
a
definition.
B
If
the
platform
is,
it
was
designed
dedicated
to
managing
devices,
and
we
hope
this
model
can
work
because,
usually
in
modern,
you
know
edge
devices,
you
you,
you
will
have
a
profile
anyway
to
describe
the
devices
you
support
and
you
need
to
have
a
kind
of
service
to
describe
where
you
can
access
those
devices.
So
this
is
all
the
high
level
requirements
and
it's
up
to
the
people
to
write
the
right
controllers
to
support
this
api.
This
is
our
hope
we
do
in
this
project.
B
Okay, let me see. Yeah, for the use cases: currently, OpenYurt is in production. It is a core component of ACK@Edge, which is already the cloud Kubernetes edge product on Alibaba Cloud. In the current use case, it serves as what I call a PaaS core: basically, it is a foundation for various edge services. That's the current major use case of OpenYurt. On the cloud side, we do have collaborations with VMware for edge management, and on the device side with EdgeX Foundry, and we see some other use cases, but this is pretty much the current state. Community-wise, we are trying to enforce a quarterly release. The entire project started in June last year; we usually set up a six-month roadmap, and we have biweekly meetings. We have collaborators from Intel, VMware, and other groups. All right, so that's pretty much the slides that I have today, so I'm happy to answer any questions from you guys.
B
Yeah, so, for Stephen's question about the comparison between this and KubeEdge: I think the biggest difference is that we don't make any change to the existing architecture. We don't change Kubernetes at all; we just use the vanilla components and try to stick to them. That may not be optimal for resources, but I think from a maintenance perspective and a compatibility perspective, this at least relieves that side of the pressure.
A
So
I
suppose
the
implications
of
that
are
that
if
you
are
say
an
eager
adopter
that
has
been
waiting
for
the
very
latest
kubernetes
release
and
it
it
gets
published,
you
should
be
able
to
apply
open
your
to
it
and
get
get
it
running.
You
know,
within
a
matter
of
days,
rather
than
waiting
for
some
project
to
rebuild
and
exactly.
B
That
rebuild
components
you'd
need
to
replace
exactly
so
so
for
me,
for
now,
I
think
everything
still
it's
kind
of
simple,
it's
kind
of
this
kind
of
the
whole
idea
is
simple.
The
implementation
is
kind
of
simple
the
it
can
be.
So
we
even
have
a
you
know
conversion.
So
you
can
have
a
one-click
conversion,
converter,
normal
kubernetes,
console
to
edge
clusters.
A
How many different variations of distributions have you heard of people successfully using this on top of?
B
Yeah, any data that we get is from production, so if you look at it, it is based on Alibaba Cloud, so they will have the same distribution. For other distributions, my only concern is that we have some components that have a version dependency, like ANP, although we have the interceptor. So I would say it this way: we didn't test all the distributions. In theory it should work, but at least we are trying to make sure it works for the latest release.

Another problem of leveraging upstream Kubernetes is that upstream Kubernetes itself is changing drastically. For example, if you are aware of the service topology support: it was added in 1.18, but it's getting deprecated in 1.21, which brings trouble to us.
B
So if we build a solution based on that, we will have compatibility issues when we reach 1.21, right? So sometimes we decide to go the other way around: instead of using that API, we just add some logic in YurtHub, so we kind of filter the endpoints that you can see, because YurtHub is just a proxy, right? You can filter all the requests that are presented to the kubelet, so we can use YurtHub to filter out all the endpoints that are outside of the current node pool, which serves exactly the same purpose as having the API support for service topology. So this is the kind of trick we use; we sometimes face this dilemma. We try to use upstream APIs as much as possible, but if an API is changing drastically, we will probably hold back and use our own solution, at least until it reaches a stable stage.
A
Another
question
I
have
is
you
alluded
to
the
cni
being
involved
with
recovering
an
ip
on
the
on
an
angel
on
an
edge
node?
What
would
I
need
to
look
for
in
the
cni
to
determine
if
a
particular
cni
supported
that
functionality
or
not
or
or
do
you
have
a
list
published
in
the
docs
or
yeah?
So
that's
that's
a
thing
so.
A
So you're saying the CNI has to be prepared to deal with a local IPAM?
B
Yes, to get this to work, exactly.
B
Ip
change,
so
I
think
it's
all
about
application.
So
in
theory
the
cloud
native
application
shouldn't
care
about
ip
in
theory,
but
in
fact
many
people,
many
applications
that
are
migrated
from
the
on
prime
data
center
to
the
cloud
they
realize
on
the
ip
for
whatever
reason,
because
maybe
they
have
their
own
platform
to
do
the
management
they
need
to
know
the
ip.
B
A
So
what
of
the
current
user
base?
What
is
the
distribution
of
the
size
of
these
clusters
that
edge?
Are
they
typical,
multi-node
yeah
edge
or
it
does
this
thing
make
sense
if
I
had
very
low
resource
edge
nodes,
you
know
like
one
host
rather
than
multiple,
and
maybe
it
was
very
small
in
resource.
You
know
something
like
an
intel,
nook,
size
thing
or
maybe
even
a
raspberry,
pi
4..
Does
this
make
sense,
or
is
it
kind
of
overkill
for
those
kind
of
use
cases.
B
At least when we do demos, we use the Raspberry Pi; we use that as a resource-limited host to do the demo, so it's supported. But in reality, at least for the Alibaba Cloud edge users, they don't have very strict limitations on the edge node resources; they typically have a server, in general a fairly capable one. In some other scenarios, like CDN, we do have clusters with hundreds of nodes spread across different regions, and even the pools we have are at large scale; in the CDN context, that's very normal, because with a CDN you are trying to build a distributed network, which involves a lot of nodes spread everywhere, right, across the whole country. They are trying to use a centralized control plane to do the management, so they chose OpenYurt as a foundation to support that PaaS layer. Yeah, so that's the reason we are not putting very strong effort into reducing the footprint of the node components. Okay, yeah.
A
Okay. And then I guess another thing people might be interested in: what kind of open source community are you running with this? Do you have your own meetings going on? And if someone were to adopt this or do a proof of concept, where would they go to get support? Do you have a Slack channel or some kind of mailing list, or how would they go about it?
B
C
B
So I think this will come to the table once we have, you know, let's say, a few U.S. users. Once that happens, I think we should think about starting U.S.-timezone meetings. But for now the major users of OpenYurt are coming from China, so that's the reason we're hosting the meetings in Asian time zones instead. But anyone can just post requests and questions in the GitHub issues; we are very responsive to that.
A
Okay, I think I've got a good comparison of this versus KubeEdge too, but could you maybe also say a few words about how this might contrast with the SuperEdge project?
B
OpenYurt and SuperEdge share a lot of commonality, so you can treat them as pretty similar, but we will probably diverge in some places, like the unit management.
B
Another thing is, they also address something about the autonomy, in the sense that they have a feature where, if one node crashes on the edge side, they provide a way to tell the API server this node has crashed, please evict the pods. They have some features like this.
B
A
I don't know, pre-provisioning this on boxes that are shipped to a location and just powered up, or is this something that requires at least a little bit of hands-on during the maintenance cycles? Exactly, I think that's a very good question. So, when we discussed
B
this pool design, I actually had the same question for the guys. But unfortunately, currently people still do the node management through labels, that kind of thing; you need at least one manual step to give the labels. There's no automation to do the thing that you said, okay, automatically detecting all the regions, trying to avoid this kind of manual work of labeling each node.
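[Editor's note: the manual labeling step B describes usually looks something like the sketch below. The NodePool API group/version and the label key are assumptions drawn from OpenYurt's documentation and may differ between releases.]

```yaml
# Hypothetical sketch: a named pool of edge nodes, plus the manual
# labeling step mentioned above. API version and label key may vary.
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge
---
# The manual step: an operator labels each node into its pool, e.g.
#   kubectl label node edge-node-1 apps.openyurt.io/desired-nodepool=hangzhou
```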
A
I guess the other thing that frequently comes up in these groups is security concerns, when these things go out to edge locations that maybe have less-than-good physical security on these nodes, and what your risk profile is. If somebody manages to get at one of these edge locations and, say, physically steals a node or something, in terms of what goes on there, what are your risks? I think it's a general problem, so
B
for Kubernetes. So for security, we are largely leveraging the security package of Kubernetes. So the problem that you mentioned, stolen nodes, you know, applies to that. But another aspect is that currently this model doesn't support—
B
I would say, although Kubernetes has multi-tenant support, I don't think OpenYurt is ready for multi-tenancy, because we cannot enforce a strict deployment requirement on the edge side, saying that you have to run your container in a sandboxed runtime. We cannot make that assumption. That's the reason I don't think OpenYurt is good for supporting the multi-tenancy use cases. Other than that, if you have a single tenant, so you as a single tenant own the entire infrastructure and cluster, you should be fine.
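[Editor's note: to illustrate the "sandboxed runtime" requirement B says cannot be enforced at the edge, stock Kubernetes expresses it with a RuntimeClass, which only helps if every node actually has the sandboxed handler installed. The handler name `kata` below is illustrative, not something from the talk.]

```yaml
# A RuntimeClass pointing at a sandboxed runtime handler (e.g. Kata
# Containers). The handler must exist in each node's container runtime
# config; on heterogeneous edge nodes that cannot be assumed.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata          # illustrative handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: sandboxed   # pod can only run where the handler exists
  containers:
  - name: app
    image: nginx
```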
A
And the tunnel you're opening, is it versatile enough to accommodate kind of a concept of a primary network and a fallback? You know, there are some of these edge scenarios where they'll attempt to use just a public internet connection as your mainstream way of getting back to the center, but also have a fallback that goes over, I don't know, a cell service or something, or even a satellite uplink, but it isn't the preferred one because maybe it's more costly.
B
So this tunnel, the concept is pretty simple, like, you know: it just establishes HTTP connections, because the connection is initiated from the edge side, right; it just creates a connection to the server side, but so—
B
Pretty common. So we do see cases where there is complete network isolation, which means that even the nodes at the edge cannot access the public IP. We see some cases like that; to support that, this tunnel doesn't work, but instead we will support, like, you know, a private network kind of thing; you need, you know, vendor routing support, hardware switch support. Well,
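[Editor's note: the edge-initiated tunnel B describes can be sketched as below. This is an assumption-laden illustration of the general pattern, not OpenYurt's actual yurt-tunnel implementation: the edge agent dials out to the cloud, and the cloud then sends requests back over that same connection, so no inbound port needs to be open at the edge.]

```python
import socket
import threading

def edge_side(cloud_addr):
    # The edge agent initiates the outbound connection, then answers
    # requests that arrive back over it (no inbound port on the edge).
    tun = socket.create_connection(cloud_addr)
    req = tun.recv(1024)
    tun.sendall(b"pong:" + req)   # stand-in for proxying to a local kubelet
    tun.close()

# Cloud side: listen for the edge agent to dial in.
srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=edge_side, args=(srv.getsockname(),)).start()
conn, _ = srv.accept()            # edge-initiated connection
conn.sendall(b"ping")             # cloud -> edge request over the tunnel
reply = conn.recv(1024)           # edge -> cloud response
conn.close()
srv.close()
print(reply.decode())             # prints "pong:ping"
```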
A
C
A
B
A
Okay, I think I'm out of questions, but thanks, it was really interesting. Does anybody else have anything?
C
A
Thanks, and if you could send a link to the deck, I'll post it to the group, either the deck itself or printed out as a PDF or something, in case people want to go back. And then maybe also a list of where people can go; I know you can get it from the GitHub, but links to your Slack and any meetings you might have might be of interest. Sure, I think I will give
B
you a PDF version of the deck through the email. Okay.
A
Okay, well, thanks again, that was really informative and I enjoyed it.
A
You're welcome to come back to the group anytime too, for updates as new releases come out, or if you even want to get feedback from potential users. And we routinely talk about subjects, things like your device management, so there might be an opportunity, if you want to come back, to just throw out questions about, hey, how are people doing this and what's already out there? Because, yeah, a lot of these niches have kind of standalone open source projects that are really interesting and gaining momentum.
A
Some of these things, I think, got incubated independently, and I found that a lot of these projects don't even know the others exist, but I think there are opportunities to combine these kinds of things together. Exactly.
B
Okay, yeah, I think that's a good idea. So, yeah, let me see, because this is an ongoing project, so let's see how it goes. If it goes well with EdgeX Foundry, I think we can share some of our experiments and experience and even show some demos in this meeting, if you guys are interested.