From YouTube: Webinar: Running the next generation of cloud-native applications using Open Application Model (OAM)
Description
Kubernetes has quickly become the de facto standard for container orchestration. However, there is no standard way to develop and operate an application running on Kubernetes. The Open Application Model (OAM) defines a standard way to build and run cloud-native applications. In this webinar, we will highlight the latest developments in the OAM realm with live demos.
Presenter:
Dr. Ryan Zhang, Staff Software Engineer @Alibaba Cloud
A: Okay, let's get started. I'd like to thank everyone for joining us today. Welcome to today's webinar, "Running the Next Generation of Cloud-Native Applications Using the Open Application Model." I'm Jerry Fallon, and I'll be moderating today's webinar. We would like to welcome our presenter today, Dr. Ryan Zhang, Staff Software Engineer at Alibaba. Just a few housekeeping items before we get started: during the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Please be respectful of all of your fellow participants and presenters. Also note that today's recording and slides will be available later today on the CNCF website at cncf.io/webinars.
B: Thank you, Jerry. So my name is Ryan Zhang, and I'm going to tell you about our latest endeavor on running the next generation of cloud-native applications using the Open Application Model.
B: So here's my Twitter account, and if you have any questions, feel free to DM me or tweet at me. So today, I hope that most of the audience (I don't know how many out there) are familiar with what Kubernetes is, and if you're not, it's okay. This webinar is not about what Kubernetes is; it's mostly about what Kubernetes is not.
B: It's pretty obvious, but let me lay out my claim a little bit more. Again, I don't know the background of the audience, but I'm mostly a developer at heart: I like to write code instead of tweaking nodes or fixing hardware. So for developers, what we do is we write code, right, and then we probably test it.
B: Hopefully you test it, and you put that into a Docker image to deploy it. Thank you, Docker. And the rest is kind of in the operations hands. You can have DevOps, but you're wearing different hats. So as an application operator, what we like to do is write rule-based abstractions, right? Depending on what tools you have, you want to say how to scale and when you want to scale.
B: So that's how we do application development and application operations. But what does Kubernetes provide? Kubernetes provides a whole bunch of abstractions, sort of layered at the infrastructure layer, but not quite there: a Deployment, a Pod; for scaling you can have HPA; and also other things like node controllers. Everything there is quite foreign to developers and operators, also in the levels of abstraction.
B: Instead of clear rules, you have a whole bunch of YAML files, right? You have Ingress, Service, Istio, whatever, all these names. You can do a lot of things, it's very powerful, but at first sight it doesn't mean anything to any developer or operator. And lastly, I think traditionally developers and operators like to use some kind of GUI, some CI, or maybe even some code to do the deployment, like old-fashioned bash scripts, right? That's how we do deployment or operations; but what does Kubernetes provide there?
B: So that's what leads to the current solutions, right? Most companies and teams that I know set out to build their own application platforms to cater towards their developers.
B: Kubernetes is actually an excellent platform for that, because Kubernetes is kind of built to support building different platforms on top of it. So here's a pattern, basically what we observed as a pattern for those platform builders on top of Kubernetes. If you can see my cursor: this is the infrastructure, this is the heavy lifting here. There are tons of different customized controllers that they write, which interact with the real Kubernetes capabilities.
B: Here, like the containers and databases, these will be imported or deployed onto Kubernetes, and that in turn will become a Pod, a Deployment, or a Service exposed through the application platform. That platform will not expose those Deployments or Pods directly. Instead, it will expose different abstract concepts, or maybe, in normal cases, a REST API. So you will create different REST APIs; say, for the developer side, you have CI/CD APIs.
B: You have source-to-image APIs: given the source address, it will basically create a Dockerfile for you. And for operations, you also have a whole bunch of logging and monitoring capabilities, through maybe a REST API, maybe even higher level with a GUI or CLI. That's how I think most teams and companies do it now. So it's kind of fragmented, and another thing is it's very targeted.
B: So, for instance, maybe this is for a function-as-a-service, or FaaS, type of workload, so the interface will be really catered towards those. And you can do other things: HPC, AI, IoT. So each company or team is building their own specialized platform, and that sort of works. The only thing is (let me get to the other side), the only thing is there are two problems. One is with building this customized solution.
B: It actually becomes a one-off solution, right? Whenever your developers need something, you're going to write maybe another controller, or at least you need to adjust your interface, and then make a whole bunch of changes. And another thing is: we know that the reason Kubernetes is so popular and so powerful is that it has huge capabilities in its ecosystem.
B: It's not Kubernetes itself that is super powerful. It is, but the real power of Kubernetes is in its ecosystem: basically, whatever you want, you name it, it has it. Those are the capabilities. But when you are building this customized application platform, you kind of lose those capabilities, because you have this specified abstraction. It's kind of like building a private cloud: you have a very powerful public cloud, and then you narrow it down to just cater towards your specific customers.
B: You want to do OpenFaaS, or, as I mentioned, you can do AI, maybe just for inference. And on the other side, you also want to leverage all the Kubernetes extensibility from a platform standpoint, right? You don't want to lose that. Basically, you do not want to reinvent the wheel; it's already there. Everything I mentioned here (service mesh, autoscaling, canary deployment, you name it) is there, so you really do not want to reinvent the wheel. So what do you do?
B: So here's what we came up with: the Open Application Model. Microsoft and Alibaba jointly announced it sometime last year, October, I think, and we are still working on it. It's been about a year and we've made a lot of progress. So what is OAM? There are different ways to see it. From the platform builder's point of view:
B: You can see it as the abstraction that allows platform builders to build developer-friendly, highly extensible application platforms. That is really what we think is the competitive edge, or the core, of this OAM model that can help the platform builders. It will make their life a lot easier.
B: Let me get into a little bit of detail. This is the standard we set out to design for the platform builders, so it provides the building blocks and framework to create app-centric platforms.
B: What does it allow? It allows the platform builders to let their application developers bring their own workloads, instead of only the workloads the platform builders develop themselves, as I mentioned, like IoT or AI. Of course, they still can do that, but they allow their users to bring their own workloads to make it very extensible. That's where the extensibility comes from, and it also naturally comes with a trait system.
B: There are actually several different Kubernetes-based platforms out there that allow you to deploy a static workload alone. You can write your code, it probably will build it for you, produce a Docker image, and deploy it into a Kubernetes cluster for you. But that's not enough, right? We all know that, especially in real production scenarios, you have a lot of operations you need to do: deployment, rollout, traffic shifting.
B: To name a few more: fault tolerance, metrics, logging, all of these. So the trait system allows you to do that. It has a discoverable capability system, so that application developers and operators can find out what kind of traits this platform has. They can clearly find out which one to use and apply it. Also, those traits don't really reinvent the wheel; those traits can fully leverage the existing cloud-native ecosystem.
B: So that is another very important feature of this platform. And finally, given that, you can see that the platform builders can strike the right balance between extensibility and abstraction. With that, you can pick which one you want: if you want abstraction, you can build some customized ones; if you want extensibility, you can allow any user to bring their own workloads.
B: And finally, naturally, kind of as a byproduct, it's geared towards a PaaS or serverless platform, because we have this trait system and we have this concept of separation of concerns. In OAM terms, we call it team-centric, which basically means different roles. As I mentioned before, we have developers, we have operators, and we actually also have infrastructure teams. The different roles have a very clear boundary between them, so that workflow ends up as something like lite-ops or no-ops.
B: With lite-ops, it's more like passive operations. You could call it serverless, but serverless is a buzzword, so it's kind of no-ops by nature: the application developers probably can just apply some traits and then call it done.
B: So let me get into a little bit of detail on what that means. First, let me talk about the OAM platform architecture; that's what we envisioned. At the center of most OAM-based platforms is the OAM runtime.
B: If you follow my cursor, you can see that the application configuration controller is the core part of the controller. It will control the application and the components. I'll get into details later on and introduce what exactly those are, but you can think of the application configuration as the OAM representation of an application, on Kubernetes or in any other place.
B: We think those are the core capabilities: both workloads and traits, such as containerized workloads and manual scalers. Those are just really basic ones; we tried not to include too many workloads or traits there, because it's extensible. So this is the core part of the OAM runtime, but then the real work is for the platform builders to extend those OAM capabilities.
B: That's where the power of OAM comes from. It's this part: because the whole infrastructure is so extensible and pluggable, the platform builders can pick whatever workloads they want to support. They can add any capabilities they want, like logging, monitoring, scaling, traffic shifting, CI/CD, alerting, security, anything you want.
B: So speaking of workflow, here's the general workflow of how an app runs on this platform.
B: First, let's start with the basics: developers write code. That's what developers do. Developers write code and combine that with a workload. We're going to get into some detail on workloads later, but you can think of a workload as a template that the platform builders provide to the developers. It's kind of how a service runs, or the format in which the code is packaged.
B: You can think of it as how it runs, something like that. So the developers write their code and combine it with a workload, and that creates a component. That component has the characteristics of the workload, but it runs the code that the developer wrote. And then pretty much the developer's work is done: you write the code, you test it (of course, again, I hope you test it thoroughly), and then you have the application operators coming in.
B: The application operators mostly define the operational aspects of a service, which, as I mentioned before, in the OAM world we call traits. So there are, again, different traits, and then finally the application operators put the traits on top of the components and create this application configuration. That is, again, our OAM definition of an application. It's the same on any OAM-based platform: any OAM-based platform would accept an application configuration file and get it to run. And finally, you submit that to the OAM runtime. In the meantime, the infrastructure operators do their maintenance. They're doing their job just like anywhere else: they need to maintain either their Kubernetes or their cloud resources and make sure everything is working underneath. And then the developers are happy and the operators are happy. That's the main idea and the workflow.
B: However, it doesn't mean that OAM is a Kubernetes-only model. It actually can apply to non-Kubernetes environments, and at least I know of some community members who are working on implementations not on Kubernetes. The concept is that the resources are basically represented in a more Kubernetes-like way. You know, in Kubernetes everything has a GVK, but that is actually pretty generic, right? You can have a GVK anywhere; it's just a unique identifier, and the YAMLs are pretty much generic.
B: So now let's start with the most developer-facing resource in OAM. We call it a component.
B: What is a component? A component is essentially an instance of your workload. As I mentioned, the platform builders provide the workload template, and then the developers basically instantiate it. And it's actually a versioning mechanism: if you have different versions, we will create different revisions for you. For example, if you look at the right side, here's the workload, and here's the component. What is the component spec? Basically, it just contains one workload spec, and this spec is what the application platform provides. The developer basically just needs to fill in the details, for example, the image and the environment. These are the details you are filling in, and if you change the component, say you change the image name, the OAM runtime will automatically create a versioned instance of that component. So you have different component instances, so it has a history there. So this is definitely an app-developer-facing object.
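As a concrete sketch of what such a component might look like (this assumes the OAM v1alpha2 Kubernetes API; the names and values are hypothetical):

```yaml
# A hypothetical developer-facing Component: it instantiates the
# ContainerizedWorkload template with only the fields a developer
# cares about (image, port), not node affinity or tolerations.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-web
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: web
          image: example/web:v1.0.0   # changing this creates a new revision
          ports:
            - name: http
              containerPort: 8080
```

Changing anything under spec.workload (say, bumping the image tag) is what would cause the OAM runtime to stamp out a new component revision, which is the version history being described.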
B: And then, since we mentioned workloads, you may ask: why do you want to have this kind of two-layered structure instead of just providing workloads alone? There are many reasons.
B: One of the reasons is, as I mentioned, the workload definitions, or workload templates, are a way for an infrastructure operator to define which components are available to application developers. For example, you can see on the right side: if the platform wants to expose more detailed workloads to the developers, it can choose to expose, say, Deployment, the native Kubernetes Deployment. That's where OAM's power comes from.
B: We would like the platform to expose only what the developers actually care about, say a containerized workload, and all it has is an image and maybe a replica count; that's about it. Because we all know that in a Deployment there are tons of things that most developers don't care about: node affinity, tolerations, preemption policy. Developers don't actually care about all these things, so it would be difficult for them to fill out all these fields. But if a developer looks at the right side, it's crystal clear, at least for me.
B: In here, we can see that the way to apply traits to components is like this. In your application configuration file, you specify the component name, which we mentioned (if we go back, it's right here), and then you can apply your traits. You can apply a bunch of traits here, as many as you want; I mean, we don't have a limit yet. We haven't seen anyone hit the limit.
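To make that concrete, here is a minimal sketch of such an application configuration, again assuming the OAM v1alpha2 API (the component name and replica count are hypothetical):

```yaml
# A hypothetical ApplicationConfiguration: the operator attaches
# traits (here a manual scaler) to a developer-authored component.
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: example-app
spec:
  components:
    - componentName: example-web   # must match a Component's metadata.name
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: ManualScalerTrait
            spec:
              replicaCount: 3      # operational concern, owned by the operator
```

The list under traits can grow with ingress, metrics, or rollout traits without the developer's component changing at all.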
B: You can set different replica counts depending on the size and the load, and of course you want to have an API gateway or ingress, basically, so that you can have a hostname and a port. And you can put multiple components in this application: you can have a front end, you can have a database, you probably will have a back end, and more different back ends. So it's totally a microservice architecture, right? And this is mostly facing the app operators. The app operators will assemble this one, but they probably also need some input from the developers, like who's going to talk to whom. So again, this is how OAM represents the application.
B: We want the platform to offer the registration and discovery capabilities, because that is also very important. That's where the definitions come in. Remember, I mentioned that we have this workload definition. The workload definition and the trait definition are not only there to put an envelope around things; they also provide these critical discovery and description abilities for OAM. For example, here you can see that the trait definition will have a lot of metadata.
B: In the spec you have metadata that says what kind of workloads it applies to and what kind of traits it conflicts with, and this is just pointing to the definition. And then every different platform builder can choose to add more; it's extensible, so you can choose to add more fields here, which can mean different things. So it provides the way to have this capability registration and discovery center there.
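A sketch of what such a trait definition might look like, assuming the OAM v1alpha2 API (the trait and workload names are illustrative):

```yaml
# A hypothetical TraitDefinition: the metadata that makes the trait
# discoverable and tells the runtime which workloads it applies to.
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: manualscalertraits.core.oam.dev
spec:
  # discovery metadata: which workload kinds this trait may attach to
  appliesToWorkloads:
    - core.oam.dev/v1alpha2.ContainerizedWorkload
  # pointer to the CRD that actually implements the trait
  definitionRef:
    name: manualscalertraits.core.oam.dev
```

A platform builder could extend this with its own fields (conflict rules, descriptions) so that a registry can answer "what traits does this platform have, and what do they apply to?"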
B: It's not officially part of the OAM spec yet, but we envision it being an essential part of any real OAM-based platform. The easy way to showcase the power here is: if you do a kubectl get traits, you will see the definitions, you will see their names. So you see, oh, this is the traffic trait; if you want traffic, you apply that to your components. And you have route, you have cert.
B: So if you want something, you can find it in your platform builder's registry and then just assemble it into a component. That's how we envision this whole system working.
B: Okay, so, demo time. We are in the middle of building a platform. Let me go back to the architecture a little bit (oops, too much, okay). So we already have this core controller pretty much built. It's called Crossplane; it's part of the Crossplane project, and it is now officially, or almost officially, I think, a CNCF sandbox project. So you are welcome to give it a try and contribute, and Crossplane has more than just applications.
B: It also handles resources, which is super important, or critical, for any real applications. So you can use the application model and you can have the resource model, and it's seamlessly integrated with different cloud providers. So you can run your applications across different cloud providers with the same interface. That's the power of Crossplane. So now what we are trying to do, as I mentioned, is bring it out to the rest.
B: We want to build out more capabilities here, and we want to build out more UIs, GitOps, all of these, to make it a full-fledged application platform. That's what we're trying to do, and I'm in the middle of that. So let me show a very simple demo; this is still a work in progress. What I'm going to show you will give you some concrete idea of what exactly an OAM application looks like, okay, in YAML files. But believe me, when the product is done:
B: Most users are not going to really get into YAML files. That's not what we want; as we mentioned, we need the UI and the CLI. But before that, let's see. We have some definition files; we already installed our platform there, so all we need to do is give it these application files. So before that, we have definitions: we have workload definitions and we have trait definitions. We have this service; we actually have more than that.
B: So this is our real definition here. You can see that we created one called ContainerizedWorkload, so that's the containerized workload definition. You can see that it points to a different CRD. So for anyone who is familiar with Kubernetes, this is basically a CRD; it's pointing to a CRD. It has some standard resources, and we have an extension. That's what I mentioned: each platform builder can choose to add more stuff to their workload definitions. We also created a metrics trait; the metrics trait can apply to a whole bunch of different workload definitions. And what does the metrics trait do? Let me show you an example here. Okay, so let me first go back to the app component side. So components, as I mentioned:
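The kind of workload definition being described might be sketched like this, assuming the OAM v1alpha2 API (the extension block is a hypothetical illustration of the platform-specific additions just mentioned):

```yaml
# A hypothetical WorkloadDefinition: it registers the ContainerizedWorkload
# CRD as a workload type the platform supports, via definitionRef.
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: containerizedworkloads.core.oam.dev
spec:
  # points to the CRD that actually implements this workload type
  definitionRef:
    name: containerizedworkloads.core.oam.dev
  # hypothetical platform-specific extension, as described in the talk
  extension:
    description: "Long-running containerized service"
```

The definitionRef indirection is what lets a platform register any CRD, native or custom, as a workload type developers can instantiate.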
B: The platform can choose to support either low-level, more detailed workloads, like a Deployment. You can see that we have a component called auto-scale application. Basically, that application exposes one metric (I forget its name) like number of requests per second or something like that; request count, I think. Yeah, it's request count, something like that. And it has this Deployment, so the user or developer has to fill in all these things: labels, label selectors, labels. Or you can use this ContainerizedWorkload, which only has a few fields: you have replicas and containers, you have image and port, and that's it. So we have these two different components in the same application, and this metrics component basically emits some simulated stats instead of real stats.
B: And now let's look at the application configuration, which is basically how the application is assembled. You can see that it has two components: one is the auto-scale one, one is the metrics one, and we apply some traits to both of them. Here we apply the metrics trait to both. So for the metrics trait:
B: What does that mean? It basically adds a parameter, an endpoint exposure to scrape those stats, so that for HPA, for alerting, for some other metrics work, you can look into Prometheus, which exposes all the stats. For example, for this auto-scale one, as I mentioned, it exposes one called requests per second, or something like that, request count, so that you can go to Prometheus and query it, and of course, if you set up Grafana, then you can see the graph.
B: Actually, we have that sort of set up, but just for the sake of this demo, I'm not going to show that. So now let's try to do a live demo.
B: All right, so basically I'm going to apply this demo file. By the way, I'm using a minikube on my desktop, and I've already set up the platform controllers, so now we just need to apply this application to it. Let me see.
B: Let's see... yeah, let me restart this. Maybe there are some remnants from the previous demo.
B: So it's deleted, and then we can see that it's not going to show anything right now; it will come back, because the Prometheus is basically...
B: I can fix that, but believe me, it will work. Okay, so that's the demo time. Let me go back to the slides, just to recap. The recap of the application platform architecture is basically what I just showed you. What we have is one of the capabilities, say the metrics; that's the capability I was trying to show off, but unfortunately there were some remnants of the previous run.
B: That was not cleaned up correctly. So this was just showcasing one of the capabilities here, and the container is the workload I showed off. You have containers, you have metrics, and then you can expose that to different application profiles.
B: You can combine them. The one I just showed is kind of like a website, but not really; you can imagine building another layer on top of that to dedicate it towards either websites or IoT, whatever that is, and it will make the work life of developers so easy on that layer. But there is some hard plumbing work that needs to be done underneath, as you saw.
B: The platform builders have to wire all these things up correctly, and then the top applications will work. Okay, finally, let me introduce the community a little bit. As I mentioned, OAM was jointly announced by Alibaba Cloud and Microsoft last year, and in the meantime, we joined forces with Upbound on this Crossplane project, which adopted OAM as a standard for application management across multiple clouds. And after that, we have some adoptions.
B: Some external adoptions: in China we have Tencent and 4Paradigm. 4Paradigm is an AI company; it basically produces all these goofy, funny pictures for their customers. So it used OAM to deploy and manage those machine learning applications; that's where the AI comes in.
B: Kubernetes configuration management? Oh, first of all, as I mentioned, OAM is not a Kubernetes-only standard or spec, so the idea of OAM goes beyond Kubernetes. And just within Kubernetes, I'm not 100% sure how you would do what I just showed, to have this separation of concerns. For example, take Deployment: pretty much all of the Kubernetes native objects have the developer and operator concerns combined. You name it: the Deployments, the StatefulSets, the Jobs, they all have a ton of operational fields buried inside the spec. I'm not familiar with whatever configuration tool you mentioned, but I'm not sure how you can separate that out, because it's kind of built natively into Kubernetes.
B: Maybe you can ping me offline and see how that works, but I find it kind of hard to envision when most of the native resources are already in a combined form there. One more thing: if you remember all these slide links, you can pop into our Slack channel and ask questions, you can give these repos a try, and you can join our community meetings every other Monday at 10 a.m.
B: That's a good question. So that depends on what you mean by business, right? There are different businesses. I assume that business probably means the really high-level business, say...
B: Something that has nothing to do with infrastructure, not a cloud business. So for any other business, the value is that you don't have to maintain a dedicated infrastructure team. As I mentioned in the beginning, many companies now actually have a dedicated infrastructure team that needs to maintain their infrastructure, maintain their platform, and make customized changes to support their engineers. With OAM, it's a plugin model; it's very lightweight. Say we have this OAM platform that I mentioned.
B: Here, right, if you have this one, the key thing is these are all plugin models, so you do not have to have a team; this is already done. Say you have a community, and of course it's open source; you have a community-built platform like this, and if your team or your company needs some specific or special ones, all you need to do is find out what's in the community. You can get those capabilities from the community or from the Kubernetes ecosystem directly.
B: So it will be a very lightweight operation, and it makes your developers' work much more efficient. By business, I assume that even if you are, say, Walmart or whatever, you still have a dedicated IT team or developers. If all your business is non-digital, then it's a different story, but I assume you have that sort of digital developers there. Then you can use this OAM model.
B: You can basically have a working out-of-the-box platform, and it's very customizable and easy to improve. You can get all the benefits from the community. That's actually a very good question, and we envision that in the future, if the community continues to grow, basically any company or team can build up their own platform within a day or something, and you don't need dedicated infrastructure engineers to fix all these things. You saw my demo right there.
B: Wow. I think the Application definition... I'm not a hundred percent familiar with that, but my understanding is it's basically a bunch of CRDs glued together. That is not what we are proposing here, especially this part: if you look at our trait system, we have this here.
B: That's what I mentioned, I think in the beginning: there are different platforms that already provide that. If you get rid of all the traits, that's pretty much the same as those platforms, and probably the Application definition there is: you just have a bunch of components, the IDs come together, and you can apply it to Kubernetes and it will work, but it lacks the traits; it lacks the operational capabilities.
B: That, I think, is the most important part of the OAM spec itself: it gives you the ability to specify the different operational interactions, from the operations side to the component itself. And on top of that, again, I have to go back to this: the Kubernetes CRD grouping, or something like that, is only this part. It doesn't have this, it doesn't have that, so it's just a static way to group a bunch of CRDs together. I think OAM has more.
B: Edge devices, okay, good. So for edge devices, let me see. The first thing is, as I mentioned earlier, OAM is not a Kubernetes-only spec. When we started the spec, we definitely envisioned OAM running on the edge too, so that, outside the realm of Kubernetes, you can have an OAM platform actually running on the edge that is a lot more lightweight and only provides limited capabilities. But the key thing here is: the application configuration is the same.
B
There could be some OAM-supported platforms there. Going back to just the Kubernetes world: in the Kubernetes world there are, as far as I know, a bunch of edge solutions, and the key thing is that OAM is not tied to any of these solutions, so you can combine the OAM model with them. I think —
B
There's
kobe
edge,
there's
young
and
there
may
be
something
else
more
and
you
can
still
make
it
work
because
all
you
really
need
is
to
add
in
your
capabilities
here,
so
you
can
say
import
the
kubernetes
edge
here
and
then,
when
you
apply
the
the
application
into
the
controller,
this
workload
can
go
through
the
edge
controller
into
the
edge,
so
that
also
works
in
your
kubernetes.
You
know
kubernetes
world
so
that
that
can
be
can
be
done
and
again.
We
are
very
interested
in
this
scenario
because
I
didn't
mention
edge.
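In OAM v1alpha2, making a new workload type — such as an edge-specific one — available to a platform is done by registering it with a WorkloadDefinition. The CRD name below is a hypothetical edge workload, not a real KubeEdge or OpenYurt API:

```yaml
# Hypothetical registration of an edge workload type with an OAM platform.
# A WorkloadDefinition tells the OAM runtime which CRD schema backs the
# workload kind, so components can then declare workloads of that kind.
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: edgeworkloads.example.dev      # hypothetical edge workload CRD
spec:
  definitionRef:
    name: edgeworkloads.example.dev    # references the CRD of the same name
```

Once a workload type like this is registered, an edge-aware controller can reconcile components of that kind onto edge nodes, while the application configuration itself stays unchanged.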
B
So, in beta — actually, to be technically correct, we are still in the alpha stage, so the spec itself is in alpha. Why is it in alpha? It's just that we need more feedback.
B
Another thing is that we are getting very close to beta, so it will get to a stable stage, and again, the more community feedback we get, the faster we can move into beta and maybe eventually get to the v1 stage. So please come and comment and give it a try, and we can move forward faster. But I fully envision that we can get to the beta stage sometime this month. That's the plan.
B
Abstraction or interface — I'm not a hundred percent sure what the difference is there, abstraction or interface. One thing is for sure: it's not just for containerized workloads. You can provide any type of abstraction in OAM for the workload. So, as I mentioned, you can have serverless, like FaaS — you can totally have a FaaS workload for which you only provide, say, the function name or entry point, and that's it; that's a FaaS workload. You can have a VM.
B
Maybe
you,
the
vm
workload
would
have
just
have
a
dm
image
and
the
same
thing,
entry
point
and
job
right.
So
it's
it's
definitely
not
just
for
containers
and
in
terms
of
whether
it's
a
interface
or
or
abstraction.
B
I
think
it's
more
like
a
abstraction
than
an
interface,
but
but
it's
kind
of
semantics
it's
hard
to
get
into
so
so
you,
the
abstraction.
When
I
say
it's,
an
extra
attraction
is,
as
I
mentioned,
you
can
just
abstract
out
many
other
things
into
which
exists
in
kubernetes
nowadays,
right
even
in
a
kubernetes
part,
you
have
different
containers,
initial
containers,
ethmo
containers
and
all
different
things
you
can
choose
to
pick.
B
Keep
them,
as
I
mentioned
in
the
you,
know,
demo,
you
can
keep
the
whole
deployment
or
you
can
keep
just
a
few
containers
or
you
can
just
have
one
container
say
you
don't
need
all
these
things.
Just
one
container
I
mean
yeah,
so
so
so
it's
I
would
say
it's
probably
an
abstraction.
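As a sketch of that non-container abstraction, a FaaS-style workload could be modeled as an OAM component that exposes only a function entry point. The `Function` kind and its API group here are hypothetical, used purely for illustration:

```yaml
# Hypothetical FaaS-style OAM Component: the workload abstraction exposes
# only what a function developer cares about (entry point and runtime),
# hiding containers, pods, and deployments entirely.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: hello-function
spec:
  workload:
    apiVersion: example.dev/v1alpha1   # hypothetical FaaS workload API group
    kind: Function                     # hypothetical workload kind
    spec:
      entrypoint: handler.hello        # function entry point, per the talk
      runtime: python3.9               # illustrative runtime choice
```

A VM workload would look analogous: a workload spec carrying just a VM image and an entry point, with no container details at all.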
B
Oh
yeah,
so
that's
that
we
get
this
a
lot
because
they
all
kind
of
come
out
of
the
microsoft
cto
office.
The
short
answer
is
they
complement
each
other,
so
oem
is
more
on
the
infrastructure
side
or
on
the.
B
How do I say that? It's about separation of concerns: how do you lay out this application and the development workflows? Dapr is mostly a paradigm for how you write applications. So, in the picture on the right, Dapr is mostly here. You can say OAM never mentions how you write code — there's no change to the way developers write code — so you still write your own code.
B
Whatever
way
you
want,
and
it
doesn't
matter
if
it's
you
know,
java
c
plus
plus
go
python,
it
works
and
that
and
the
the
om
comes
kicks
in
pretty
much
after
the
code
is
done
and
then
you
have
this
developer
create
all
these
darpa
is
mostly
here.
Docker
is
a
way
for
you
to
for
developers
to
write,
distributed
computing
system
and
and
utilize
all
these
common
middle
middle
middleware
in
darpa.
So
so
that's
actually
really
a
good
question.
Then
they
complement
each
other
right.
We
are
we.
We
know.
B
Microsoft
is
already
working
on
data
on
top
of
om
and
it's
a
very
exciting
development.
There.
B
I
guess
like
global
warming,
I
guess
in
world
peace
yeah.
What
like
alpine
is
a
problem
we
don't
solve
like.
How
do
you
write
code?
How
do
you
write
bug
free
code
that
we
cannot
solve
what
other
problems
we
cannot
solve?
B
Again, it's not a programming model, but hopefully — our vision is that it will become the de facto standard for platform builders on top of Kubernetes, because we can see that Kubernetes has basically become the de facto standard for container services, right? But, as I mentioned — I spent quite some time on those slides — it is not an application-centric platform, and so —
B
There's a gap between developers and the whole application layer — a cognitive gap — and hopefully OAM will fill that gap and be a de facto standard for most of the platforms on top of Kubernetes, at least, and in the future probably beyond. Hopefully. When we set out to develop the OAM standard, we wanted it to be universal, but we will —
B
See
just
one
more
thing
is
like
we
already
noticed:
there
are
several
different
platform
application,
centric
platform
offerings
in
the
kubernetes
ecosystem,
but
we
think
that
eventually,
hopefully
we
already
see
that
many
of
them
lack
of
this
trait
and
and
the
registration
capabilities
which
greatly
hinder
their
effectiveness
to
provide
real
good
service
for
the
developers.
A
Okay, well, that's all the time we have for today. Thank you very much, Ryan, for your presentation. The presentation will be available later today on the CNCF website. Thank you again for attending, and have a wonderful rest of your day.