From YouTube: Nephio Project Overview & Demo
A: If you look at edge workloads, orchestration and management present new challenges. By edge workload we mean a network service or an edge-native application. The challenges we see are, first, scale: you could have tens of thousands of edge sites with hundreds of workloads. The second is infrastructure dependency; interestingly, with the cloud we try to decouple ourselves from the infrastructure.
A: Now, what if we could use Kubernetes to solve this problem? The answer is that you actually can. What was not known to me is that Kubernetes is a control plane that can do more than container orchestration. I had always assumed that was what Kubernetes was for, but it's actually a general-purpose control plane, and container orchestration was just the first application. We'll talk about how Kubernetes can be used to solve the edge orchestration and management problems I just outlined.
A: So, introducing Nephio. Nephio is a new Linux Foundation open source project, seeded by Google. It has been around for just a few months; the first Developer Summit happened in June of this year.
A: It is a 100% Kubernetes-based, intent-driven orchestration and management framework for network services, edge computing applications, and the underlying infrastructure, and that, in a sense, addresses the challenges I mentioned completely. It is multi-vendor, cloud and edge infra oriented, so you can orchestrate edge and cloud infrastructure across multiple vendors.
A: So why Nephio, when there are other solutions out there? I would say the biggest reason is scale. When you look at tens of thousands of sites, tens of infrastructure providers, and hundreds of workloads, this is a scale we have not seen before, and because of that you need a new solution; that's where Nephio comes in. It's suitable for multiple sites, it's intent-driven, which again helps with scalability, and there is constant reconciliation of state.
A
If
you're
just
a
few
sites
you
can
manually
reconcile,
but
at
this
scale
you
have
to
have
constant
automated
reconciliation
and
day
one
day.
Two
is
also
taken
care
of
by
the
same
mechanism
which
lends
itself
to
scalability.
You
have
CI
CD
baked
in.
As
you
will
see,
it
goes
all
the
way
from
infrastructure
to
workload
so
on
demand,
distributed
clouds
and
it's
suitable
for
infrastructure
and
workloads.
And
finally,
it's
heterogeneous.
So
it's
meant
for
multi-vendor
environments.
It
can
handle
public
and
private
clouds.
A: What type of problems can Nephio solve? These are just three examples; there's really no limit to the types of edge applications you can use Nephio for. The first is multi-vendor edge services brokering: here a telco may want to be the single point of contact for enterprises and provide them edge capacity from different vendors as per their needs. For example, if somebody says "I want an edge that is five milliseconds from such-and-such a location," the telco automatically finds the right edge, provisions it, and essentially provides a brokering service.
A: The Nephio architecture, at a very high level, is as follows. You first state your intent; everything is declarative and intent-driven. The intent tells Nephio what the infrastructure requirements are, what the network function or edge-native application requirements are, and, on top of that, the network service or composite application. Then, based on the intent, orchestration happens: you validate the intent and deploy it, and that goes into a control loop where you're constantly monitoring the application and reconciling the state.
A
I'm
going
to
be
very
quick
here,
nephew
extends
kubernetes,
it
has
three
pillars:
each
one
based
on
custom
resource
definitions,
which
is
a
way
to
extend
kubernetes
apis
and
under
a
crd,
you
have
operators,
the
three
pillars
for
operators
are
Cloud
infra,
where
you
orchestrate
the
cloud
infrastructure
itself
cloud
and
Edge
workload,
resource
automation
where
you
orchestrate
Network
Services
and
Edge
Computing
applications
and
then
workload
configuration.
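To make the extension mechanism concrete, here is a minimal sketch of a CRD that registers a new intent type with the Kubernetes API server, which an operator could then reconcile. This is an illustration only; it is not an actual Nephio API, and the group and kind names are hypothetical.

  # Hypothetical CRD sketch: registers a new "EdgeWorkloadIntent" type
  # that an operator watches and reconciles. Not a real Nephio CRD.
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: edgeworkloadintents.example.nephio.org
  spec:
    group: example.nephio.org
    scope: Namespaced
    names:
      plural: edgeworkloadintents
      singular: edgeworkloadintent
      kind: EdgeWorkloadIntent
    versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  placement:    # e.g. a region selector
                    type: string
                  sourceRepo:   # where the raw package lives
                    type: string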
A
So
these
are
the
three
pillars
that
that
nephew
is
addressing
and
the
three
key
principles
I
won't
go
into
them
are
intent
based
automation,
it's
all
based
on
intent.
We
looked
at
one
example.
You
can
see
another
example
here
declarative
configuration
it's
very
important
in
nephew
to
be
declarative
and
not
have
any
imperative
information
in
the
intent.
So
in
fact
it's
called
configuration
as
data
and
non-complex.
So
very
simple,
kubernetes
Cloud
native
principles
are
very
simple
and
those
are
the
ones
that
are
being
used.
A
Ci
CD
is
automatically
baked
in
Sandeep
will
hint
at
this,
and
with
that
I'm
going
to
hand
it
over
to
my
colleague
Sandeep
to
walk
through
the
demo.
B: Okay. In this demonstration, what we are going to show is the orchestration of the infrastructure, and then the automation that prepares that infrastructure for workload deployment. Following that, we are going to orchestrate the workload, which in our case is a caching DNS application. We will see how the placement intent specified during workload orchestration selects the clusters that were created by the infra automation.
B
There
is
an
example
of
day
0
configuration
of
the
caching
DNS
application.
So
in
this
specific
use
case,
the
configuration
will
be
based
on
the
cluster
type.
So
the
controller
is
basically
going
to
look
at
the
type
of
the
cluster
and
then
based
on
that
it
will
configure
the
scaling
profile
in
the
application
dynamically
and
then
it
will
basically
deploy
the
curated
package
to
the
Target
infra.
B
Thank
you
so
this.
So
this
is
a
very
high
level
diagram
of
the
nephew
platform
and
the
Three
Edge
clusters.
So
what
we
see
in
the
nephew
platform
are
primarily
two
kind
of
clusters.
One
is
the
workload
cluster
and
another
is
the
infrastructure.
One
is
the
workload
controller
and
another
is
the
infrastructure
controller.
B
The
infrastructure
controller
underneath
is
going
to
use
the
the
cross
plane
and
we
are
going
to
orchestrate
eks
cluster
and
the
the
nephew
control
controller.
Node
also
has
the
config
sync
as
a
component
and
confixing
is
basically
the
get
Ops
tool
here
and
config.
Sync's
job
is
to
sync
the
packages
from
the
gate
wrappers
to
which
it
is
pointing
to
and
ports
is,
the
port
is
the
manager
for
the
kpt
packages.
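Registering a repo with Porch is itself declarative, done through Porch's Repository resource. A minimal sketch is below; the repo URL and secret name are placeholders, and the exact field spellings may vary by Porch version.

  # Sketch: register a git repository with Porch so it can discover and
  # manage the kpt packages inside it (placeholder URL and secret).
  apiVersion: config.porch.kpt.dev/v1alpha1
  kind: Repository
  metadata:
    name: edge-deployment-repo
    namespace: default
  spec:
    type: git
    content: Package
    deployment: true        # packages here are deployment-ready
    git:
      repo: https://github.com/example-org/edge-deployment-repo.git
      branch: main
      directory: /
      secretRef:
        name: git-auth      # auth secret for the repo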
B
So
basically,
what
we
have
to
do
is
we
have
to
register
the
git
repositories
with
the
porch
and
ports
automatically
discovers
all
the
packages
that
are
present
in
these
git
repositories
and
ports.
Then
Provisions
the
users
to
basically
download
these
packages
perform
mutations
and
upload
these
packages
upload
the
curated
packages
to
the
Target
repos.
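Under Porch, each of those steps is tracked as a package revision. A rough sketch of what such a revision looks like follows; the names are placeholders and the task structure is an approximation of Porch's PackageRevision API.

  # Sketch: a Porch package revision cloned from a catalog package.
  # It starts as a Draft, is mutated, then moves to Proposed and Published.
  apiVersion: config.porch.kpt.dev/v1alpha1
  kind: PackageRevision
  metadata:
    name: edge-deployment-repo-cached-dns-v1   # placeholder name
  spec:
    packageName: cached-dns
    repository: edge-deployment-repo
    revision: v1
    lifecycle: Draft
    tasks:
      - type: clone
        clone:
          upstreamRef:
            git:
              repo: https://github.com/example-org/catalog.git
              ref: main
              directory: /cached-dns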
B
In
this
example,
there
is,
there
is
a
repo
associated
with
every
cluster
and
the
config
syncs
in
these
clusters
are
configured
to
sync:
the
packages
that
are
uploaded
to
these
to
their
corresponding
wrappers
and
then,
along
with
that,
we
have
infrarepo.
Infrarepo
is
supposed
to
contain
the
infrastructure
related
packages.
B
So
in
our
example,
the
infrared
repo
will
have
the
the
kpt
package,
which
has
the
cross
plain,
krm
objects
and,
and
then
there
is,
there
is
a
repo
which
is
which
has
in
a
few
packages
itself,
which
are
basically
these
the
controllers
that
we
see
the
config
sync
and
ports,
and
then
private
catalog
packages
contain
the
raw
packages
raw
workload
packages
yeah.
So
so
this
is.
This
slide
basically
describes
the
all
the
components
that
are
part
of
this
demo.
B
So
this
slide
shows
the
the
concept
of
intents
at
a
very
high
level
and
in
this
demonstration,
what
we
want
to
show
is
that
the
user
is
specifying
two
intents
two
high
level
intents
and
the
first
one
is
going
to
be
the
infrastructure
intent
followed
by
the
another
intent
that
is
going
to
be
the
workload
intent
in
the
inferent
and
all
the
the
user.
B
Basically
all
it
all,
the
user
needs
to
specify
is
the
properties
of
the
the
infra
that
that
he
wants
to
create
and
the
the
example
could
be:
the
zone
region
wavelength
zone,
for
example,
and
the
and
the
the
properties
like
it's
if
we
if
he
requires
a
GPU
in
the
Target
cluster
and
these
high
level
intents
are
then
understood
by
the
controllers,
the
infrastructure
controller
specifically
in
nephew,
and
it
will
perform
the
job
of
creating
this
infra
based
on
the
intent
that
is
specified.
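Conceptually, an infra intent of that shape might look like the sketch below. This is only an illustration of the idea; it is not the actual Nephio schema, and every name in it is hypothetical.

  # Hypothetical infra-intent sketch: the user states desired properties
  # and the infrastructure controller realizes them.
  apiVersion: example.nephio.org/v1alpha1
  kind: InfraIntent
  metadata:
    name: edge-site-east
  spec:
    region: us-east-1      # placeholder region
    zoneType: wavelength   # e.g. a wavelength zone at the edge
    gpu: true              # the target cluster must offer GPUs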
B
Similarly,
for
the
workload
intent
at
a
high
level.
Only
two
pieces
of
information
is
what
is
required.
One
is
the
placement
intent
where
the
user
can
specify
the
reason
where
he
wants
to
deploy
the
workload
followed
by
the
source
source
repo.
So,
in
the
previous
slide
we
saw
the
catalog
repo
where
the
raw
packages
are
present.
So
user
has
to
specify
the
source
repo
so
that
the
controllers
could
download
the
the
packages
from
there
perform
mutations
and
upload
them
to
the
Target
repos
via
ports
yeah.
We
can
go
to
the
next
slide.
B
So
this
slide,
it
basically
shows
the
end-to-end
demo
so,
as
I
described
the
pink
boxes
on
the
left
most
of
the
screen.
So
user
starts
with
specifying
the
the
infrastructure,
orchestration
intent
and
that
is
specified
via
a
custom
resource
which
is
called
the
deployment
package
custom
resource.
B
It
will
perform
the
mutations
if
any
specified
in
the
placement
specified
in
the
in
front
end,
and
then
the
nephew
controller
is
going
to
upload
the
curated
package
to
the
to
the
infra
deployment
repo
and,
as
we
know
that
there
is
config
sync
present
in
the
nephew
controller
cluster
itself,
and
this
config
sync
is
basically
meant
to
sync:
the
infrastructure
related
packages
from
the
infra
deployment
repo
and
as
we
as
the
controller,
curates
and
uploads,
the
curated
package
to
the
infra
deployment
repo,
the
config
sync
will
automatically
in
the
back
end
starts
syncing
that
syncing
that
package
and
as
a
result
of
it,
what
will
happen
is
that
the
the
the
the
air
controllers
from
AWS,
they
will
come
into
action
and
they
will
start
creating
the
resources,
the
resources
which
will
basically
comprise
our
eks
cluster.
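For context, a Crossplane-managed EKS cluster is expressed as a KRM object roughly like the sketch below. The field names follow my understanding of the Crossplane AWS provider's eks API group and may differ by provider version; all values are placeholders.

  # Sketch of a Crossplane KRM object declaring an EKS cluster
  # (placeholder values; exact schema depends on the provider version).
  apiVersion: eks.aws.crossplane.io/v1beta1
  kind: Cluster
  metadata:
    name: edge-eks-cluster
  spec:
    forProvider:
      region: us-east-1
      version: "1.23"
      resourcesVpcConfig:
        subnetIds:
          - subnet-aaaa1111   # placeholder subnet IDs
          - subnet-bbbb2222
      roleArnRef:
        name: eks-cluster-role
    providerConfigRef:
      name: aws-provider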
B
So
this
is
the
part
I
mean
it
covers
how
the
eks
cluster
orchestration
is
basically
automated,
but
that
is
not
enough
for
workload
orchestration.
We
have
to
prepare
these
clusters
for
workload
orchestration
and
we
have
automated
that
process
as
well,
and
that
is
the
job
of
the
nephew
infra
controller.
So
it's
a
POC
level
controller.
B
So
what
it
does
is
it
watches
the
eks
kubernetes
resource
in
the
nephew
controller
cluster
and
it
waits
for
this
this
resource
to
come
into
ready
state
and
once
it
comes
into
ready
state,
it
deploys
config
sync
in
the
Target
cluster
and
it
also
configures
the
config
sync
to
to
basically
point
to
its
corresponding
Azure
EPO.
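Pointing Config Sync at a repo is itself a small piece of declarative configuration, typically a RootSync object. A sketch, with a placeholder repo URL:

  # Sketch: configure Config Sync in the target cluster to pull packages
  # from its corresponding edge deployment repo (placeholder URL).
  apiVersion: configsync.gke.io/v1beta1
  kind: RootSync
  metadata:
    name: root-sync
    namespace: config-management-system
  spec:
    sourceFormat: unstructured
    git:
      repo: https://github.com/example-org/edge-1-deployment.git
      branch: main
      dir: /
      auth: token
      secretRef:
        name: git-creds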
B
So
that
is
the
that
is
the
minimum
preparation
that
is
required
in
order
to
prepare
in
order
to
prepare
these
clusters
for
handling
the
workload
deployment
of
the
workload.
So
this
was
the
complete
end-to-end
path
of
the
infrastructure,
Automation
and
then
comes
the
workload,
automation,
workload,
orchestration,
so
workload,
orchestration
user
specifies
the
workload
intent
and
so
basically,
two
two
pieces
of
information.
B
One
is
the
Source
repo
where
the
caching
DNS
raw
package
is
available
and
another
is
the
placement
intent
and
then
this
controller
will
basically
resolve
the
placement
intent
and
it
will
figure
out
that
it
has
to
push
this
package
to
the
edge
repo
which
is
associated
with
our
eks
cluster
and
the
based
on
the
kind
of
the
as
repo.
B
It
will
basically
configure
the
scaling
profile
in
the
caching
DNS
kpt
packages
and
the
the
the
curated
packages
will
be
basically
put
on
the
Azure
repo
and,
as
we
know
that
our
infra
controller
has
already
configured
the
config
sync
in
the
eks
cluster
to
pull
these
packages
from
The
Edge
repo.
B: This is how the YAML for the infrastructure intent looks. All it has is the source repo from which to pull the EKS kpt package, and it says that the package type is infra. Based on this, the controller differentiates between infra and workload packages.
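A sketch of what that deployment-package custom resource could look like is below. The kind and field names are hypothetical; the demo only tells us it carries a source repo and a package type.

  # Hypothetical sketch of the infra deployment-package intent: only the
  # source repo and the package type are known from the demo.
  apiVersion: example.nephio.org/v1alpha1
  kind: DeploymentPackage
  metadata:
    name: eks-infra
  spec:
    sourceRepo: https://github.com/example-org/infra-catalog.git  # placeholder
    packageName: eks
    packageType: infra   # tells the controller this is an infra package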
B: We will list the packages now. As we see, the highlighted package here is the one that was pushed by the controller, and it has the EKS KRM objects. This package is in the proposed state; we are going to approve it manually, and the approved packages are synced by Config Sync.
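In Porch terms, approval is a lifecycle transition on the package revision, roughly Draft to Proposed to Published; once Published, Config Sync picks the package up. A sketch with hypothetical names and approximate field spellings:

  # Sketch: approving a proposed package revision amounts to moving its
  # lifecycle to Published (placeholder names).
  apiVersion: config.porch.kpt.dev/v1alpha1
  kind: PackageRevision
  metadata:
    name: infra-deployment-repo-eks-v1
  spec:
    packageName: eks
    repository: infra-deployment-repo
    revision: v1
    lifecycle: Published   # was Proposed; approval publishes it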
B: This is the workload intent. At a high level, for this example, it has two pieces of information: one is the placement, which here says us-central1, and the other is the source repo where the caching DNS package is present.
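Again as a hypothetical sketch (not the actual Nephio schema), the workload intent pairs a placement with a source repo:

  # Hypothetical sketch of the workload intent from the demo:
  # a placement (region) plus the source repo of the raw package.
  apiVersion: example.nephio.org/v1alpha1
  kind: DeploymentPackage
  metadata:
    name: cached-dns
  spec:
    sourceRepo: https://github.com/example-org/catalog.git  # placeholder
    packageName: cached-dns
    packageType: workload
    placement:
      region: us-central1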
B: So now, what is the link between the placement intent that is specified and the EKS cluster that the infra controller has created? The infra controller creates cluster profiles, and cluster profiles basically map what was specified in the infra intent to the target cluster. This is how the cluster profile looks: it is again a Kubernetes object, one per cluster, and this cluster profile says the scaling profile is a small fixed size.
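A hypothetical sketch of such a cluster profile (names invented for illustration; the demo tells us it carries the scaling profile and, as mentioned below, a pointer to the cluster's target repo):

  # Hypothetical cluster-profile sketch: maps the created cluster to its
  # properties and to the repo where curated packages should be pushed.
  apiVersion: example.nephio.org/v1alpha1
  kind: ClusterProfile
  metadata:
    name: edge-eks-cluster
  spec:
    region: us-central1
    scalingProfile: small-fixed-size
    targetRepo: edge-deployment-repo   # pointer to the cluster's edge repo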
B
So
this
is
what
the
the
controller
will
read
and
then,
based
on
this,
the
controller
will
basically
muted
the
caching
DNS
package
and
and
configure
the
scaling
profile
in
the
package.
B
And
this
this
cluster
object
also
has
the
the
pointer
to
the
Target
repo
of
this
cluster
So.
Based
on
this
information,
the
controller
will
mutate
the
packages
and
push
them
to
the
to
the
Target
repo,
and
at
this
point
we
have
applied
the
object
and
this
would
have
basically,
the
controller
now
would
have
pushed
the
package
to
the
Target
repo.
B
So
similarly
yeah
so
yeah,
so,
as
was
the
case
for
the
infra
infra
infra
intent,
so
this
package
is
in
proposed
state
and
we
are
going
to
approve
it
manually.
B
So
this
process
takes
10-15
minutes
bringing
up
of
the
cluster
and
preparation.
So
I
would
just
go
to
the
end
of
this
video,
so
yeah
So.
Eventually
the
application
did
come
up
in
the
Target
cluster
and
yeah.
So
that
concludes
the
infrastructure
orchestration,
followed
by
the
workload
orchestration
by
specifying
only
two
intents,
with
the
help
of
the
nephew
nephew
controllers,
so
yeah.
So
that's
that's
about
it
with
the
demo.
A: Okay, thanks Sandeep, and hopefully our viewers got a good idea of what Nephio is and how powerful it is going to be when it comes to edge workload and infra orchestration across a variety of segments, starting with retail, healthcare, V2X, manufacturing, logistics, telco, etc.
A
So
we'll
end
on
our
final
slide
of
what's
next,
so
please
do
get
involved.
If
nephew
is
interesting
to
you,
please
join
us.
You
will
find
all
the
information
at
nephew.org
and
wiki.nephew.org.
You
can
watch
developer,
Day
videos
to
get
a
much
deeper
idea
of
what
nephew
is.
We
have
a
resource
for
executives.
A
So
if
there
are
managers
director
level,
people
VP
level,
people
who
want
to
understand
what
nephew
is
and
how
it
can
create
value
for
them,
then
please
use
our
white
paper
and,
finally,
at
any
point
in
time,
if
your
organization
is
looking
at
nephew,
feel
free
to
request
a
one-hour
workshop
with
us
and
we'll
be
happy
to
walk
you
through
nephew
in
in
more
detail
with
that.
Thank
you
and
have
a
wonderful
day.