From YouTube: Centaurus Monthly TSC Meeting 1/25/2022
A
Okay, let's get started. Today is to give some updates on the edge team. We had a talk last March, I think, right at the beginning when we started this edge effort. So today is just to give you some updates on what we have done and what we want to do in the future.

I have about 37 slides, so I'm going to go through them rather quickly. If you have questions or want to discuss any part in detail, we can talk offline.
A
So
two
part
one
is
the
roadmap.
The
here
is
a
timeline
from
the
left
right,
so
we
started
about
the
february
last
year
and
the
the
project
is
called
the
fornix.
A
It's
on
the
github
under
centaurus
and
back
in
february
we
were
just
trying
to
define
the
scope
and
get
started,
so
the
first
release
was
in
august
last
year
that
point
we
are
starting
with
some
very
fundamental
features
like
the
edge
cluster
hierarchical
cluster
and
all
that,
so
that
release
was
on
8,
30
or
august
30.,
and
so
after
that
we
spent
quite
some
effort
on
networking
and
I'm
showing
them
from
bottom
to
from
bottom
up,
also
to
show
that
one
thing
is
built
upon
another,
so
we're
building
this
thing
both
go
just
along
with
the
time
and
each
feature
has
a
dependency
to
go
to
work
together
for
the
future
features.
A
So
we
started
with
the
edge
cluster
and
we
worked
on
the
the
multi-tenancy
edge
networking,
which
means
we
can
share
vpc
across
different
clusters
and,
like
workload,
can
you
can
ping
each
other?
Can
connect
talk
to
each
other
through
different
clusters,
so
that
was
the
first
release
for,
for
this
networking
part
is
september.
We call this 9/30, but back then it was mostly a PoC. It was not a straightforward change, because we had to work with Mizar and understand Mizar; the whole feature is based on Mizar. On the other side, we don't want to make too many changes to Mizar, so however it works, we want to work together with it. So at 9/30, that was the first release, with a few small features here, and the PoC was working for this part.
So 9/30 was still the PoC, and the real release is 1/30, by the end of this month. That is for this networking part to work correctly with all the real features as release code instead of a PoC. Okay, going past the end of this month, we have a few features in planning. One is what we call stateful serverless, or stateful workloads, and I'm showing it here. One is the part that I just spoke of.
That part is this cross-cluster edge storage. We can serve both serverless workloads and container workloads. There's also an effort to work with different container runtimes; I will talk about that a bit more. Another feature builds on this networking and this storage: the serverless platform, of course. And we also want to work on scheduling, because once we have all these different components built up, then we can try to solve the problem of where we want to schedule.
How do we want to schedule the workload based on information from different clusters? Now remember, for the edge we're not dealing with one gigantic data center; we're dealing with a bunch of smaller clusters, or single nodes, distributed across different networks.

So this is just an overall picture of the roadmap and the activities so far.
It can only support a single edge node on the edge. When I say "on the edge", it just means it's not in the data center: the control plane is in the data center, but all the edge nodes are somewhere outside the data center network. We wanted to extend this so that on the edge we support an edge cluster instead of a single node, and we achieved that. Also, the edge cluster can be different flavors: it can be Kubernetes, it can be Arktos, it can be k3s, a smaller version of Kubernetes. And if we have single or standalone clusters outside this main control plane, we have to deal with them.

Okay, if we have a deployment, how do we send the deployment to these clusters? And if we have status, for example some cluster status, or some workload deployments with pod status, how do we get that information back to the cloud? So we have all of this. Here is just another way to show it.
So, in addition to a single edge cluster outside the cloud data center, we also have a way to stack up all these edge clusters in a hierarchical way.

So, as you can see here, at this point Fornax can support a single edge node, a single edge cluster, and hierarchical edge clusters, and each of these clusters on the edge has its own control plane. So the user has the option to control all of this from the cloud, or they can go into a single control plane and just work from there.
A
This
is
the
debate
we
had
a
long
time
ago.
Do
we
want
to
just
have
a
single
single
layer,
or
do
we
want
to
have
a
hierarchical
layer
here,
we're
listing
the
pro
and
accounts?
That
means
it's
the
choice
we
want
to
provide
the
option.
We
want
to
provide
the
choices
to
the
user
and
it's
up
to
them
to
to
determine
which
one
is
the
right
one
for
that
they
have
their
different
use
cases.
Apparently
they
have
pro
and
accounts
in
in
different
field.
A
I
will
not
go
into
the
details,
but
yeah
just
this
is
something
that
we
are.
We
are
bringing
to
the
table,
something
new.
We
are
bringing
to
the
table,
especially.
B
And, if I may, just a short comment, though. If you go to a previous slide, so this is, you can see that, you know, Shouting, and you know, Stefan, you guys are familiar with a lot of the federation work going on. I know the Mesos guys did federation, and Kubernetes, OpenStack, and all that. So the picture, the model 2 here, is very different. Actually you can see there's no one central control plane now, so the control plane is all decentralized, basically. Yeah, so the whole federation control plane could be.
A
Okay, thank you. So this is the model. If you're curious about how this works in detail, how it is implemented: this has to do with the question of what's the relationship between Fornax and KubeEdge. Essentially we extended Kubernetes. We take the KubeEdge code; it still has the CloudCore and the EdgeCore. But remember, KubeEdge can only deal with a single edge node, and we extended that, based on the Kubernetes infrastructure, to support these standalone clusters and also to support hierarchical clusters.
So here's a picture of the components that we added to support the edge cluster. For KubeEdge, it has this entity; we did not touch that. But in parallel to this, we developed another component called the edge cluster daemon, or clusterd. This is just like a relay component to handle the traffic to the other clusters here. And also, if we have this, we can do it recursively.
So this is a bit of detail, but we have a CRD. The CRD is like an envelope; that's why it's shown as an envelope here. The CRD essentially carries the Kubernetes object's definition in it, and it also has a target cluster. The CRD can be created from the cloud, and it will be propagated through different clusters to the target cluster. So, for example, here, if this cluster sees that the target includes itself, it will deploy whatever Kubernetes object is inside, like a pod, to its own worker nodes. And it will also see that, oh, the target is not just me, there are some other clusters right below me, and it will propagate that envelope, that CRD, to all those other clusters.
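To make the envelope idea concrete, here is a minimal sketch of what such a CRD type could look like in Go, assuming hypothetical names like EdgeMission; the actual Fornax API may differ.

```go
// Hypothetical sketch of the "envelope" CRD described above: it wraps a
// serialized Kubernetes object plus a target-cluster selector, and is
// relayed cluster by cluster until a matching cluster unwraps and applies it.
// Names here are illustrative, not the actual Fornax API.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// EdgeMission carries a Kubernetes object definition to one or more edge clusters.
type EdgeMission struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   EdgeMissionSpec   `json:"spec"`
	Status EdgeMissionStatus `json:"status,omitempty"`
}

type EdgeMissionSpec struct {
	// Content is the wrapped Kubernetes object (Deployment, Pod, ...).
	Content runtime.RawExtension `json:"content"`
	// TargetClusters selects the destination clusters, e.g. by name pattern.
	TargetClusters []string `json:"targetClusters"`
}

type EdgeMissionStatus struct {
	// PerCluster records the apply result reported back from each cluster.
	PerCluster map[string]string `json:"perCluster,omitempty"`
}
```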
C
Excuse me, one rather technical question, I guess, about this: how do you specify the cluster? Do you use some kind of, let's say, well-known cluster ID, and you specify that statically in the CRD? Well, that's one question, and the other question is: how do you do the routing? I mean, how do you know where to send the envelope next, so to say?
A
Currently we're using a label to specify a certain cluster, and we can use something like a wildcard to say: if the cluster name has this and this in it, then it applies. When a cluster sees this CRD, it will see, oh, that matches my name, I will open this envelope and deploy whatever is in it. And each cluster has information about which clusters are right below it, so it will say: go down there.
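As a rough illustration of that matching step, here is a small sketch that treats the target as a glob pattern over cluster names; the real selector is label-based and may look different.

```go
// Illustrative sketch of the target matching described above, assuming a
// glob-style pattern over cluster names; not the actual Fornax selector logic.
package main

import (
	"fmt"
	"path"
)

// matchesTarget reports whether this cluster's name matches any target
// pattern in the envelope, i.e. whether it should open and apply it.
func matchesTarget(selfName string, targets []string) bool {
	for _, pattern := range targets {
		if ok, _ := path.Match(pattern, selfName); ok {
			return true
		}
	}
	return false
}

func main() {
	targets := []string{"edge-west-*"}
	fmt.Println(matchesTarget("edge-west-003", targets)) // true: apply locally
	fmt.Println(matchesTarget("edge-east-001", targets)) // false: only relay downward
}
```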
C
Okay, and this information: well, first, the name specification is static, right? You would basically put it in the CRD. And then the names of the clusters below, is that something which is also, let's say, distributed at deployment time, let's say like a config file, or is it rather based on some sort of discovery service, or something like that?
A
Oh, we don't have a discovery service yet, but in about 15 minutes I will get to a section where there is something; we developed another algorithm just for that. But we haven't implemented it as a release yet. Okay, yeah, but as you said, the goal is eventually for them to be able to find each other in a smart way; here we're more static.
D
A
Okay, so this is how we send a deployment, how we send the workload. But the other side is, essentially, in this picture we were going top down, and the other direction is to go bottom up. This has to do with all kinds of status: we need to know the status of the cluster, and we need to know the status of the workload. For example, if I have a cluster here that's showing red, that means the cluster is not healthy, and then at the upper level we need to have that information. So we have implemented all this. It's not a single-level report, because now we're dealing with multiple levels, so it has to go one level up and up; it's more of a relay of information. If this cluster is red, it's going to tell its parent cluster that the status of this cluster is not healthy, and its parent will keep reporting it all the way up.
Okay, so that is going top down and bottom up in this hierarchical cluster.
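Here is a minimal sketch of that bottom-up relay, assuming illustrative status types rather than the actual Fornax ones: each cluster rolls up its children's health and reports the result to its parent.

```go
// Minimal sketch of the bottom-up status relay described above: each cluster
// aggregates its children's health and reports the result to its parent, so
// an unhealthy leaf eventually surfaces at the cloud. Names are illustrative.
package main

import "fmt"

type clusterStatus struct {
	Name     string
	Healthy  bool
	Children []clusterStatus
}

// rollUp returns what this cluster reports to its parent: its own health
// plus the relayed health of everything below it.
func rollUp(c clusterStatus) map[string]bool {
	report := map[string]bool{c.Name: c.Healthy}
	for _, child := range c.Children {
		for name, ok := range rollUp(child) {
			report[name] = ok
		}
	}
	return report
}

func main() {
	root := clusterStatus{
		Name: "edge-region", Healthy: true,
		Children: []clusterStatus{
			{Name: "edge-site-1", Healthy: true},
			{Name: "edge-site-2", Healthy: false}, // the "red" cluster
		},
	}
	fmt.Println(rollUp(root)) // edge-site-2:false surfaces at the top
}
```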
Another thing that we did: we want to show, okay, how do we deploy a more complex workload? Right here we're talking about, okay, in this envelope we can put a deployment, we can put a pod. But in reality the application is more complicated; the pieces have dependencies on each other.
If we have a setup like this and we want to deploy this workload all the way to the edge, sure, we can do that. But the workload sometimes has its front end, back end, and database, and they depend on each other, and when the user needs to set these things up, they have to do some configuration, right? It's not just "we have a deployment."
What you need to care about, or what you want to see here, is all those commands like this, right? Those do not belong to a certain Kubernetes object, but we need to have them, so that if we have a workload, an application like this, with the database, the front end, some storage here, some processing here, all these different moving parts working together, there are some commands required. So we are supporting this now.
Okay, so that's the first part; it's about the edge cluster. Just a quick summary: we can run a cluster on the edge, we can run hierarchical clusters on the edge, we can get the status back, and we can send a deployment or a pod, for example, to a certain cluster or a certain set of clusters. We also allow some configuration when the user wants to configure something.
Conceptually, the goal for this part is, say, we have pods; for example, we have two pods here in this picture. They are running in different subnets, but they are in the same cluster; the picture is showing cluster one, and they are in the same VPC. The subnet and the VPC are implemented using Mizar.
So conceptually we just have a VPC in a cluster, we have two subnets, and in each subnet we have one pod. And of course, when we talk about networking, the two pods want to talk to each other.
Okay, so with Mizar we're able to do this already. In one cluster, everything is fine. Then, when we move this to different clusters, it gets a bit complicated, because now subnet one is in cluster one, subnet two is in cluster two, and pod two at this point is in subnet two, so they are separated into different clusters.
The basic concept is still based on Mizar, so if you know how it works: it has a bouncer, a divider, and the pod. If the pod wants to talk to some other pod, the traffic will go from this node to the bouncer node. If the bouncer finds that, okay, it's going to a different subnet, then the bouncer will simply send it to the divider, and the divider will take it from there; usually it would send it to another bouncer that's responsible for the other subnet.
But if we're talking about a subnet in a different cluster, then the divider cannot just simply send it to the corresponding bouncer. So we added this gateway component for each cluster. We had a lot of conversation here; the overall goal for this part is that we do not want a centralized gateway. If we have 10,000 edge clusters, we do not want those 10,000 edge clusters to talk to the same gateway. We want each of them to have their own gateway, and they can talk to each other.
So the purple circle here is to show that we are still using Mizar, but the divider now will send the traffic to the gateway, and then it will see, okay, who has this subnet, who has this destination subnet; in this case it's gateway one. So the packet will be sent to gateway one, and gateway one will send it to its own divider, and the divider to a bouncer, and so on and so forth.
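As a rough sketch of that forwarding decision, the snippet below checks whether the destination subnet is local and otherwise hands the packet to the owning cluster's gateway; the table shape and names are assumptions, not Mizar code.

```go
// Illustrative sketch of the forwarding decision described above: check
// whether the destination subnet is hosted in the local cluster; if not,
// the packet goes to the remote cluster's gateway.
package main

import "fmt"

type subnetRoute struct {
	localBouncer string // bouncer endpoint if the subnet is in this cluster
	remoteGW     string // gateway address of the cluster that owns the subnet
}

// nextHop returns where a packet destined for dstSubnet should be sent.
func nextHop(routes map[string]subnetRoute, dstSubnet string) (string, error) {
	r, ok := routes[dstSubnet]
	if !ok {
		return "", fmt.Errorf("unknown subnet %s", dstSubnet)
	}
	if r.localBouncer != "" {
		return r.localBouncer, nil // stays inside the cluster
	}
	return r.remoteGW, nil // cross-cluster: hand off to the peer gateway
}

func main() {
	routes := map[string]subnetRoute{
		"subnet-1": {localBouncer: "bouncer-a"},
		"subnet-2": {remoteGW: "gateway-1.cluster-2.example"},
	}
	fmt.Println(nextHop(routes, "subnet-1")) // bouncer-a
	fmt.Println(nextHop(routes, "subnet-2")) // gateway-1.cluster-2.example
}
```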
So the implementation: here are more details on this, but essentially the implementation is, in a certain cluster, for any subnets that are not in this current cluster, we just dedicate this gateway host as the bouncer node, so that for Mizar, within one cluster, it still thinks it's talking to some bouncers. So here on the left there are two subnets that belong to this cluster; on the right we have two subnets that do not belong to this cluster, but for Mizar, sorry, in this current cluster, it still thinks we have subnets three and four in this cluster. So the traffic is still going to go from this bouncer to the divider to the other bouncer, but on this gateway host we have made changes such that the traffic will not be processed by the bouncer anymore. The bouncer does not really do anything for this traffic; it just sends it to user space, and we have a gateway agent running in user space to send the traffic out. Here is another way to show this.
It just has more information, but as you can see, we have a pod here. The traffic will go from its own bouncer to the divider to this gateway host, and this transit XDP is part of Mizar, but for this traffic it does not really do anything. It simply sees that it's going to another subnet that it doesn't have in this cluster, so it sends the packet to the user-space gateway. The gateway sends the traffic to the other side, and it goes the reverse way in the other cluster.
C
I'm sorry, one question also about this. So basically, if I want to, let's say, connect two clusters, I need to do this VPC peering first, and that would imply, I guess, provisioning these two gateways and connecting them together. Is that, first of all, correct as an assumption? I would say yes. Yeah, okay. So, basically, is it the case that each cluster comes with one single gateway host, or is it possible, for example, to have multiple gateway hosts in a single cluster, so that I can do this VPC peering with multiple VPCs?
A
Oh yeah, a good question. The answer is, at this point we only have one gateway in each cluster. The gateway does not work for just a single VPC; the gateway works for all the VPCs. But in our planning, we say that in the future we can add multiple gateways, though that's for a different purpose; it's not one per VPC, it's for fault tolerance purposes.
C
A
During runtime as well, the gateway agent, as you see here, will watch whatever is running in the current cluster. For example, it has Mizar here; it has all those Mizar objects, VPC, subnet, and all that. The gateway will watch those, take that information, and use it to do the communication. And also, we have a way to allow all the gateways to talk to each other so that everybody is in sync.
C
A
For example, if we have traffic coming in, how does it get there? We know we want to send this information to this guy; we have a way to route it. Yeah, but that information has to be updated constantly. Yes, exactly; it's like BGP, basically, BGP does that. And also we want to do this in an efficient way: we don't want to say that this gateway has all the information of all the gateways in all the clusters, so we have a way to deal with that.
C
A
B
E
The packet itself is already encapsulated with the overlay when the gateway forwards this packet?
A
I see what you're saying. Yes, the answer is yes: the packet coming from Mizar is a Geneve packet.
C
A
In between the gateways it's not a Geneve packet anymore. Well, the Geneve packet is packed inside as the payload, and we put another layer on top of it.
E
A
True, yes, because all these clusters are in different locations on the internet. If they want to talk to each other, they each have to have at least one public IP.
E
Yeah, I understand. I mean, let's say cluster A has 10 physical hosts. It probably has 10 internal IPs, private IPs, one for each host, and a public IP for its gateway, right? And the same situation for cluster B.
E
Now, for these 20 private IPs, do they have to be unique?

A
No, no.
B
Because, you know, in Mizar you can have your own private IPs and all that. But the moment the packet goes from one gateway to another, from then on it's like my job, basically; it's all controlled by me. So you can have your own private IPs and everything; that doesn't change.
E
I see, so...
A
Say, for this host, it doesn't matter what its private IP is. The private IP in this cluster is only used to send the packet from the host to the gateway, and on the gateway, in the packet, the outer IP, the outer hardware address, all those addresses will be changed based on where it's going.
E
And also, this is for the Mizar implementation, because it's based on the bouncer, so you do not need to consider the hosts' private IPs. But other VPC implementations usually will try, even for the first packet, to send the packet directly to the other host. So if there are overlapping private IPs, there might be trouble for other VPC implementations.
B
The other VPC means a different, non-Mizar VPC, you think? Yeah.
E
B
A
In the current implementation, we did change a bunch of parts in Mizar so that it will do a bunch of things; for example, especially on this gateway host, this transit XDP, we have changes there to do all those things. And the direct path was quite a headache; it took us probably a month to figure out how to work around that.
B
A
This is a very interesting question, though. I have not thought about the requirements around all these private IPs, but it's a good point: if we think outside the Mizar world, that might be an issue. It will probably take a bit of time to look into that, yeah.
E
Because ideally we shouldn't care about these private IPs, but our current VPC implementations assume one physical node can talk to another physical node directly. If we solve it with the L2 gateway, then if there are overlapping IPs, we will have some trouble, yeah.
B
E
B
A
But anyway, okay, good question. So in this whole picture, in this implementation, the beautiful part is that eventually our final design is able to work seamlessly with Mizar. Mizar still thinks it's working by its own logic, but for us, it's not a hack: we're able to take the traffic from Mizar, send it out, and receive it all the way back, and we are able to work with the direct path just as it works.
Just now we mentioned this question. Just remember this color, this purple: whenever we see purple, it means a gateway, and here we are talking about two clusters.
In reality, we could have a bunch of gateways that connect to each other. So the circle here just means they are building their own communication ring, and each gateway has some information to share with the other gateways. For example, this gateway X will have VPC one, subnet five, subnet zero, and the other gateway will have subnet four and so on. So they own their own information, and they want to share some of it with the other gateways.
How do we do that? Here is a detailed design of what goes into the gateway agent. I will not go into the details, but conceptually it will watch all those Mizar objects in its own cluster, and it will also watch the other gateways. So there's communication between different gateways whenever something happens.
That something could be, for example: we have this cluster setup, and some cluster created another subnet. This information is shown here; this new subnet 3.1 has to be propagated to a certain number of clusters. It doesn't have to be propagated to all the clusters, but it needs to be propagated to the clusters that also have VPC 3, right? If another cluster here has VPC 3, that means there could be some communication going on between them. But other clusters that do not have VPC 3 don't have to know, because there's no possibility of traffic going there. Whenever they have VPC 3, they can talk to each other, and we need to update all that information. So the bottom line here is: we have a way for all these gateways to work together, so that whenever we create a new VPC, a new subnet, or some other network object, the information can be propagated in an efficient way to whoever cares about it.
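A minimal sketch of that propagation rule, with illustrative names: an announcement for a new subnet only goes to peer gateways whose clusters already host the same VPC.

```go
// Minimal sketch of the propagation rule described above: a new-subnet
// announcement only goes to peer gateways whose clusters host the same VPC.
// The data shapes and names are assumptions, not the actual gateway agent API.
package main

import "fmt"

type subnetEvent struct {
	VPC    string
	Subnet string
}

// peersToNotify returns the peer gateways that care about this event,
// i.e. those that already host at least one subnet of the same VPC.
func peersToNotify(peerVPCs map[string][]string, ev subnetEvent) []string {
	var out []string
	for gw, vpcs := range peerVPCs {
		for _, v := range vpcs {
			if v == ev.VPC {
				out = append(out, gw)
				break
			}
		}
	}
	return out
}

func main() {
	peers := map[string][]string{
		"gateway-1": {"vpc-1", "vpc-3"},
		"gateway-2": {"vpc-2"},
	}
	fmt.Println(peersToNotify(peers, subnetEvent{VPC: "vpc-3", Subnet: "subnet-3.1"}))
	// Only gateway-1 is notified; gateway-2 hosts no VPC 3 subnets.
}
```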
This is still in PoC; we don't have it yet. So far, the edge networking implementation is about connectivity: it's about pod one or pod zero being able to ping another pod in another cluster. We are assuming that all this metadata is there, but going forward from 1/30, we want to put this in, so that it eventually becomes a self-sufficient system when someone creates a VPC or subnet somewhere.
Okay, so that's what we have about the networking on the edge. Just a quick summary, two sections: one is about connectivity, about one pod being able to ping another pod in a different cluster. The second part is about all this metadata syncing. If we put these two parts together, then all these VPCs and subnets can function together, and the user doesn't have to care
whether my application is in one cluster or another cluster, because the applications can find each other in a smart way, so the user does not have to know. Okay, so the third part is about storage. This is pretty new; it started right before the new year, so this is looking ahead to see what we want to do after 1/30. I'll go a bit quickly here. One topic is storage.
We have those luxury storage solutions in the cloud. When we are running on the edge, it's a different story, because that storage is too far from the edge, right? Whenever we have some data and want to store some state, for example, we don't want to go to the cloud to fetch or store the information; and on another node, when the function runs there, it would need to go to the cloud every time, and every trip to the cloud is a very long trip. We don't want to do that.
We want to keep that information at the edge to get all the benefits of edge computing. So the current thinking is: for each edge cluster we will have a storage abstraction. You can think of the storage abstraction as something like a KV store implementation, but the difference between the current implementations, the current options, and what we want to have is that it has to be able to go across different clusters.
That should remind you a little bit of the edge communication part that we just spoke about. The implication is that when we have the edge network running, we are able to build this storage abstraction on top of it, so that, for example, in this picture I have one cluster here and one cluster there, and the storage abstractions can talk among themselves. If we have some storage information here, some KV values here, we can sync them to the other side without the user having to know, and the function can run on top of it.
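A minimal sketch of what such a cross-cluster storage abstraction could expose, assuming hypothetical interface names: a plain KV interface for the function, with replication to peer clusters handled behind it.

```go
// Minimal sketch of the cross-cluster storage abstraction described above:
// a KV interface that functions use locally, with replication to peer
// clusters handled behind it. Interface and method names are assumptions.
package store

import "context"

// EdgeKV is what a serverless function sees: a plain key-value store.
type EdgeKV interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Put(ctx context.Context, key string, value []byte) error
}

// Replicator syncs local updates to the same abstraction in peer clusters,
// over the cross-cluster edge network described earlier.
type Replicator interface {
	// Push sends a locally applied update to peers; delivery may lag,
	// consistent with the eventual-consistency model discussed below.
	Push(ctx context.Context, key string, value []byte) error
}
```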
There are different ways we want to do this. Here I just want to show you that we are looking at a solution called Anna. We have research going on about all the possible solutions; I'm just using Anna as one example. So Anna supports a KV store in one cluster.
It actually supports, within a certain node, having multiple threads that can also work with each other, but it's working in the CRDT model, the consistency model. So the boundary of Anna is one cluster, and we're looking into how we can extend this to different clusters using the edge networking we have, so that the storage can be distributed among all these clusters. And for the user, the information will be propagated automatically based on certain criteria.
The user does not have to know, right? If the user is here and puts some information there, and then goes to another cluster for some reason, they should still expect to be able to see the information they just put in here. Here is some more information if you're curious about the CRDT consistency model and how Anna works. Essentially, it's a KV store. You can think of this as one node or one thread, it doesn't matter; it's implemented under the same abstraction.
So we have keys and values and we do some updates. For example, we can update all these values, and at different times we have K2 here, K2 here, K2 here. As you can see, there may be a conflict going on: the first time we updated it here, the second time we updated it there, so they have different values. Now, the CRDT model is eventual consistency, which means that all...
B
Yeah, because it's not just eventual consistency, though; it's strong eventual consistency. I think there's a difference, because with eventual consistency, you know, some replicas can go out of sync; they may have incorrect information. But a CRDT doesn't allow you to do that; it maintains safety as well, basically.
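As a small, self-contained illustration of that idea (not code from Anna or Fornax), here is a last-writer-wins register, one of the simplest CRDTs: replicas accept writes independently and merge deterministically, so they converge once they have exchanged state.

```go
// Illustrative CRDT: a last-writer-wins register. Replicas accept writes
// independently and, when they exchange state, merge deterministically, so
// all replicas that have seen the same updates converge to the same value
// (strong eventual consistency).
package main

import "fmt"

// LWWRegister keeps the value with the highest (timestamp, nodeID) pair.
type LWWRegister struct {
	Value  string
	Stamp  int64  // logical or wall-clock timestamp of the write
	NodeID string // tie-breaker so merges are deterministic
}

// Merge folds another replica's state into this one.
func (r *LWWRegister) Merge(other LWWRegister) {
	if other.Stamp > r.Stamp ||
		(other.Stamp == r.Stamp && other.NodeID > r.NodeID) {
		r.Value, r.Stamp, r.NodeID = other.Value, other.Stamp, other.NodeID
	}
}

func main() {
	a := LWWRegister{Value: "k2=v1", Stamp: 10, NodeID: "cluster-1"}
	b := LWWRegister{Value: "k2=v2", Stamp: 12, NodeID: "cluster-2"}

	// Merging in either order yields the same result.
	a.Merge(b)
	fmt.Println(a.Value) // k2=v2
}
```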
A
Yeah. And you may wonder why we want to use this in the first place; why do we not have this for the cloud? The answer has to do with edge computing: we have to assume a few issues, namely connectivity, reliability, and fault tolerance.
All these things are different from the cloud. In the cloud you have a well-maintained data center; on the edge, things are distributed and the connection may come and go. So we cannot say that when we write to a certain replica, we have to wait until every single one of these replicas comes into sync. We cannot do that wait, because sometimes, if a node or the connection goes away, it will never happen.
B
A
Also, we are interested in how this would affect scalability and performance. If we are able to write to a certain replica without having to wait for another one, the performance, you would imagine, should be better, but there's also other overhead coming in when we try to sync between them. So there are all kinds of questions we need to answer in this picture.
This is how Anna works internally. I will not go inside here; just imagine that it has a bunch of actors that can talk to each other and sync among themselves with ZeroMQ. This is how Anna works among different nodes. We're still working on this, but as I said, the current boundary of Anna is that all these nodes have to be within one cluster, and we want to extend this to put them on different nodes in different clusters.
The question we are interested in, with all this serverless and stateful work, is: for edge computing, there's data and there's computation, and sometimes they do not end up on the same clusters, right? Then how do we decide: do we want to move the function, or do we want to move the data? With the CRDT model it's easier, because the data, by definition, will move around by itself, but there could also be a consistency issue there. So we don't have a good answer to this question yet.
Also, one effort for the edge is the workload runtime and platform; these are two different things. For the runtime, we already have Docker running, and we want to support other kinds of runtimes. So Quark is another effort that just started about a month ago, and it has a few benefits in performance and security compared to traditional containers. So this is the runtime part.
The platform part is essentially this: we don't want the user to have to deploy k3s or Kubernetes and then install serverless and all those things. We want to provide a standalone binary or standalone build, so the user can say, I want to run serverless on a certain cluster, and they can just deploy this platform, and then they'll have all the networking functionality, they'll have the serverless support, and all that.
One thing about scheduling: when we talk about scheduling, the scheduling will happen based on certain information, right? Where do I want to put something, based on different metrics? For example, if we have different load in different clusters and one cluster is overloaded, like in this case one cluster is overloaded.
The other cluster is okay. Then the scheduling should be able to accommodate this; it should be able to say, I want to move some of the workload from this cluster to the other cluster, or when I deploy another workload, I want to send it to another cluster. But again, that goes back to: where is the data, right? If the data is here, how do we move it? So this is one kind of metric we want to work on.
This is an initial list of metrics that we think we care about; for example, there's also energy, there's bandwidth, the kind of network, 5G or Wi-Fi, and all that. So there may be a way to consolidate all this information into one vector of metrics or data, and then the scheduling will work based on that.
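As a rough sketch of scheduling over such a consolidated metrics vector, the snippet below scores candidate clusters with assumed metrics and weights; the names and weights are purely illustrative.

```go
// Illustrative sketch of scheduling over a consolidated metrics vector:
// each candidate cluster reports a few normalized metrics, and a weighted
// score picks a placement. Metric names and weights are assumptions.
package main

import "fmt"

type clusterMetrics struct {
	CPULoad   float64 // 0..1, lower is better
	Bandwidth float64 // 0..1, higher is better
	Energy    float64 // 0..1 relative energy cost, lower is better
}

// score turns the metrics vector into a single number; higher is better.
func score(m clusterMetrics) float64 {
	return 0.5*(1-m.CPULoad) + 0.3*m.Bandwidth + 0.2*(1-m.Energy)
}

// pick returns the name of the best-scoring cluster.
func pick(candidates map[string]clusterMetrics) string {
	best, bestScore := "", -1.0
	for name, m := range candidates {
		if s := score(m); s > bestScore {
			best, bestScore = name, s
		}
	}
	return best
}

func main() {
	fmt.Println(pick(map[string]clusterMetrics{
		"edge-1": {CPULoad: 0.9, Bandwidth: 0.8, Energy: 0.4}, // overloaded
		"edge-2": {CPULoad: 0.3, Bandwidth: 0.6, Energy: 0.5},
	}))
	// edge-2 wins because edge-1 is overloaded.
}
```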
Another thing that we want to talk about in terms of scheduling: when we do the scheduling, once we have the metrics, there are different ways to do it. You can do it manually, say, I see one cluster is overloaded, so I put the workload on another cluster. Or I can do this on demand, based on who is being overloaded, or we can do it automatically, or we can do it in a smart, proactive way. Proactive just means we anticipate.
When we talk about this smart way, one example is the MEC situation, where we use edge computing for vehicles. The vehicle is talking to a certain cluster, and as the vehicle is moving, based on the direction it is moving, we can try to guess which cluster it will attempt to connect to in the future. If the vehicle turns this way, it could end up talking to this cluster, or if it continues straight, it will end up talking to these clusters.
This is what I mean when we say it's proactive; ideally it's the best way, but it will take more effort.
Okay, so those are all the components we have done or are thinking about doing. Here I just put them all together, and what I want to show is that they have a relationship. We started from here, and we built the networking on top of this cluster work. We are building the storage component on top of all this, and at the same time we have the runtime and the platform, and scheduling can happen on top of that; it has to rely on some of these, and there are probably other components that the scheduling has to rely on as well. We are also working with Stefan on what else we want to do with scheduling: how do we meet those requirements, how do we meet all those metrics? But the point I'm trying to make here is that there's a dependency between them. The edge team, for the Fornax project, is working on all these blue parts, and we haven't gotten to this part yet, but we want to provide all the other components to support this feature in the future.
Okay, so just going back to the timeline, to the roadmap: we are about here, right past January. We're looking at what we want to do past the 1/30 release, and we also have these two parts being planned, but we're not there yet, because some of these components have to be more mature before we can have a very solid implementation there.
B
Thanks, thanks, this was great. So, Stefan, I think we're obviously going to talk more; you know, we have another 9:30 meeting.
B
So
we'll
talk
more
about
the
synergy
of
what
we
currently
have
and
how
you
know
the
the
collaboration
work
which
we
planning
to
do,
how's
that
gonna
basically
work.
C
B
Okay, then, if not, this is pretty much it. So we can end the meeting.