Description
As a community, we are all building up the know-how for running Kubernetes. At its heart, Kubernetes is an infrastructure platform for container orchestration, and we need to do more for application developers building distributed microservices on top of it. In this session, you will get a better understanding of the Open Application Model (OAM), an open source specification that lets developers and application operators describe their apps and their needs instead of dealing with the complexities of infrastructure. We will look at the problems this project is attempting to solve, how it approaches them, and how it leverages and works with existing projects along the way.
A
Hi to everyone who's joining us today, and welcome to today's CNCF webinar on OAM, the Open Application Model: a team-centric app model for application developers and operators. I'm Karen, community program manager at Microsoft, and I'll be moderating today's webinar. We'd like to welcome our presenter today, Mackenzie Olsen.

Just a few housekeeping items before we start. During the webinar all attendees are muted. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or the questions that would be in violation of that code of conduct; in particular, please be respectful of all your fellow participants. And with that, I will hand it over to Mackenzie to kick off today's presentation.
B

Great. Thank you, Karen.
Just as a heads up, we have a couple of moderators from the Open Application Model project in the chat. If you ask any questions during the presentation, they'll be fielding those, and if there are any that I want to bring up, I'll speak to those as well. All right.
So, as we're all well aware, Kubernetes has provided a really useful set of APIs to help us orchestrate our container primitives, but there are a lot of resources to keep track of. You have your basic objects with your pods and your services. You can separate them with namespaces, and then, on top of that, you have different layers of abstraction to choose from depending on your requirements: Deployments, DaemonSets, StatefulSets, ReplicaSets, etc. With all these different resources, it becomes difficult to see what the original application topology and runtime requirements were. That raises the question: how can we stitch all of these discrete resources into an easily operable application?

At this point you might be asking yourself: hasn't Helm solved this problem for us? Helm is a fantastic package manager specifically for Kubernetes resources. It's great for bundling your files together, but it doesn't provide any guidance on how you'd want to model the application itself. It's still entirely up to you how you fit all of those individual LEGO pieces together into your application.
So, leading up to the Open Application Model, we talked to a lot of different folks and asked them questions about their DevOps practices, specifically in relation to their orchestrators, and we noticed a couple of interesting themes among larger companies and enterprises. We would see really large ratios of application developers to infrastructure operators or application operators, which makes sense, because the way clusters work today, you can host many, many applications inside the same cluster.
B

A couple of themes around how they wanted to solve this problem: lots of homemade PaaS and FaaS layers were built on top of the orchestrator to try and separate it away from the developers, while still leaving enough room for the developers to specify what the runtime requirements for their workloads would be. That would work fairly well, but there would be a lot of leaks of orchestration-level concepts up to the developers. Another solution that we saw companies coming up with was really complex CI/CD pipelines meant to stop the developers' concerns at containerization, but that was not enough: a lot of development teams still had to specify the dependencies for their services and were still having to communicate with the operations teams. Not to mention that if something went wrong with these really complex CI/CD pipelines, it became very costly to fix.
B

So with all these concepts in mind, we came up with the Open Application Model. It's an open source specification to help us define cloud native applications. We're really trying to solve how distributed applications are composed and then handed over to those who are responsible for operating them. We're currently in a pre-release v1alpha1 stage, but we're working on a second release that should be coming out soon. There are three main guiding principles for the model.
B

Lastly, there's this growing ask for the application to be brought to different environments, whether that be cloud and edge, or multi-cloud and hybrid deployments. So we wanted to write a model that would help folks have a very consistent application that they could plug into all these different environments.
B

Who are we trying to cater towards? We have three main personas. They don't necessarily need to be separate people; as I mentioned earlier, it could be one person doing it all, or it could be three separate roles. First and foremost is the application developer, whose main job is to focus on delivering code in a platform-neutral setting. Then there's the application operator, who is essentially adding runtime characteristics to the code that the application developer is producing; this could be things like autoscaling, or applying traffic management, identity, etc. And lastly, we have our infrastructure operators: the folks who are configuring the environments to satisfy any unique operating requirements of the application.
B

A component's purpose is to encapsulate application code. Here we're defining runtime requirements, like what the workload type would look like. We've also provided a place for developers to specify parameters that might be overridden by application operators. You'll see resource requirements and health and liveness probes — fairly similar to what you would see in a pod specification.
B

Once you have your components, we want to add additional runtime functionality to them, and we do that with something called traits. These are discretionary runtime overlays that help us apply operational functionality. Say we have our component — let's say it's a web server — and we want to allow traffic to flow into it; we might add something like an ingress trait to that component.
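As a sketch of what attaching a trait looks like in practice, here is an illustrative YAML fragment loosely following the v1alpha1 spec. The component name, instance name, and hostname are hypothetical, not an excerpt from the slides:

```yaml
# Fragment of an application configuration: an ingress trait
# attached to a web-server component instance.
- componentName: frontend        # hypothetical component
  instanceName: frontend-web
  traits:
    - name: ingress
      properties:
        hostname: example.com    # hypothetical host
        path: /
        servicePort: 8080
```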
B

Now that we have our components, you might be collecting a lot of these, and you may need some way to group them together. That's where we came up with the idea of application scopes: discretionary application boundaries where you can group components based on their behavior. An example of this might be a set of components that you want to live in the same subnet.
B

And lastly, we have the application configuration, which defines the application deployment. It's important to understand the distinction between application scopes and configurations: scopes are all about grouping components together based on a common characteristic, while the application configuration is how we take the three concepts we've learned about — components, traits, and scopes — and instantiate them. It's totally reusable: if you create an application configuration, your components go live, the traits are applied, and they're living inside the scopes, and you can reuse that application configuration.
B

Putting these four main constructs — components, traits, scopes, and application configurations — together with the personas we described earlier: the application developers author the component schematic. This is essentially a container pod spec along with a workload type, which I'll dive into in a little bit, and any parameters that they might need to have overridden by the application operator. Once they've written those, the application operator will go ahead and pick the necessary components they want to deploy and add the necessary traits, such as ingress or, as another example, autoscaling. They can apply the necessary scopes — say, a network scope — and then deploy onto the environment that has been configured by the infrastructure operator. These arrows don't indicate temporal ordering; most likely you have the environment already configured before you go ahead and execute these two steps.
B

So, a simple example of what an application might look like: here we have two components, a singleton server — which is a workload type — and then a database component. From here, I've added an ingress trait saying I want traffic to flow into this front-end component, and then a manual scaler trait applied to the second component. A manual scaler trait is essentially a way of declaring how many replicas you want a component to run at a given time.
B

So that's the general architecture — what does the YAML look like that supports these constructs? Here we have an example of a component schematic for both of the pieces I described earlier. For the web server — the web UI — we specified the workload type, which is a singleton server. This means that we want at most one instance of this running.

We can see here there is a database connection string that the developer doesn't have, but they are going to parameterize it so it can be overridden later by the application operator, and they've also provided the necessary container information: the image, required resources, ports, and the environment variable that's being overridden as a parameter. For the back-end piece, we have a containerized MongoDB. There are no parameters included for this one; they're just specifying the resources and necessary ports. You can see this one is of a slightly different workload type.
B

This one is a server. The main difference between a singleton server and a server is that we would expect multiple instances of a server to be running, which is why you saw earlier that we added a manual scaler trait. The application developer doesn't really care how many replicas of this specific component will be running; they just want to surface up to the application operator: okay, I would like more than one instance of this to be running.
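The two component schematics walked through above might look roughly like the following. This is a sketch loosely following the v1alpha1 spec; the names, images, and values are illustrative rather than the exact slide contents:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: web-ui
spec:
  workloadType: core.oam.dev/v1alpha1.SingletonServer  # at most one instance
  parameters:
    - name: db-secret          # connection string the developer doesn't have;
      type: string             # surfaced for the application operator to fill in
      required: true
  containers:
    - name: web-ui
      image: example/web-ui:1.0   # illustrative image
      resources:
        cpu:
          required: 0.5
      ports:
        - name: http
          containerPort: 8080
      env:
        - name: DB_CONNECTION_STRING
          fromParam: db-secret
---
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: mongodb
spec:
  workloadType: core.oam.dev/v1alpha1.Server  # multiple replicas expected
  containers:
    - name: mongodb
      image: mongo:4.2            # illustrative image/tag
      resources:
        cpu:
          required: 1.0
      ports:
        - name: mongo
          containerPort: 27017
```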
B

Diving a little bit more into workload types: this is a concept that's specific to components. Right now we have six supported workload types in OAM: server, singleton server, worker, singleton worker, task, and singleton task. Each shares specific pieces of information about how that component should be run. For servers, we expect there to be a service endpoint and we expect the workload to be daemonized, and the main difference between those first two rows again is the number of replicas running.
B

That same pattern holds true for workers as well as tasks. The difference going down the list is that there are no endpoints for the workers or the tasks, and workers will essentially restart, whereas tasks will not restart once they have completed whatever they're trying to execute. All of these are containerized as well, so they will all be represented with that set of parameters asking for the image, resources, etc.
B

So that's essentially all the information that needs to be provided by the application developer about what and how the component will run. Now, as for traits and scopes: for components, we fill out that component schematic where the developer surfaces up the necessary information, but for core traits and scopes we do not fill out that same kind of schematic. Instead, these are defined in the OAM specification itself, because they are core to the Open Application Model.
B

So, in order to know what parameters we need to fill out, you visit the OAM specification and you can see what traits and scopes are supported. Here we have the manual scaler trait, and it's fairly simple: you only need one parameter, which is the replica count. You can also see it doesn't apply to all of the various workload types (workload types being specific kinds of components), which makes sense.
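The spec's entry for the manual scaler trait can be pictured roughly like this — a simplified sketch, not the verbatim spec text:

```yaml
# Simplified sketch of the manual scaler trait as defined in the spec.
name: manual-scaler
appliesTo:                       # deliberately excludes singleton workload types
  - core.oam.dev/v1alpha1.Server
  - core.oam.dev/v1alpha1.Worker
  - core.oam.dev/v1alpha1.Task
properties:
  - name: replicaCount           # the only parameter
    type: int
    required: true
```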
B

You wouldn't want to add a manual scaler trait to a singleton server, because by definition you should only have one instance of a singleton server running at any time. So we're setting up some guardrails to set the relationship between our application developer and application operator up for success. The same concept holds true for application scopes. Here's an example of our network scope: the required information being a network ID, a subnet ID, and then an internet gateway type. We list out the complete set of required parameters.
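Declaring an instance of the network scope inside an application configuration might look like this — a sketch with hypothetical IDs, with property names following the v1alpha1 spec loosely:

```yaml
# Network scope instance declared in an application configuration.
scopes:
  - name: my-network             # hypothetical scope instance name
    type: core.oam.dev/v1alpha1.Network
    properties:
      - name: network-id
        value: my-vpc            # hypothetical network ID
      - name: subnet-id
        value: my-subnet
      - name: internet-gateway-type
        value: public
```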
B

Going back to our sample application, we had our web front-end and our database back-end. Here I've just modeled them in the same file, because they're YAML, and the application developer has provided this information. The application operator is going to go ahead and pick out what's necessary: we want to pull out that web UI, and we're going to pull out that MongoDB.

The parameters have already been surfaced there, and we're just applying the necessary traits and scopes. For the MongoDB, we give it an instance name; we've also given it a manual scaler trait, and we're going to set the replica count to 3, and we want it living inside the same network scope, which we have instantiated above. For the web UI, we've filled out that parameter — the DB secret that we had left as a parameter.
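Putting that walkthrough into YAML, the application configuration might look roughly like this — a sketch loosely following the v1alpha1 spec, with hypothetical names and a made-up secret value:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: sample-app
spec:
  scopes:
    - name: my-network                 # network scope both components live in
      type: core.oam.dev/v1alpha1.Network
      properties:
        - name: network-id
          value: my-vpc
        - name: subnet-id
          value: my-subnet
  components:
    - componentName: mongodb
      instanceName: sample-db
      traits:
        - name: manual-scaler
          properties:
            replicaCount: 3            # three replicas of the back end
      applicationScopes:
        - my-network
    - componentName: web-ui
      instanceName: sample-web-ui
      parameterValues:
        - name: db-secret              # filling in the parameter the developer left open
          value: mongodb://sample-db:27017   # made-up value
      traits:
        - name: ingress
          properties:
            hostname: example.com
            path: /
            servicePort: 8080
      applicationScopes:
        - my-network
```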
B

Rudr is our Kubernetes reference implementation. It is an open source project and works on any Kubernetes cluster, so this could be a managed Kubernetes such as EKS or AKS, or a full-fledged Kubernetes cluster you have running on-premises that you manage yourself. Rudr supports all core OAM constructs, so those core workload types I was talking about earlier should all work within Rudr.

The high-level flow here: the application developer specifies their components; the application operator pulls them together inside an application configuration by applying the necessary traits and scopes; and, bringing back Helm, you can pull all these YAML files together in a Helm chart and deploy. The way you interface with Rudr is through kubectl, so you can use the same tools that you're familiar with today.
B

If you are an infrastructure or application operator who's used to working with Kubernetes, kubectl and all of the Kubernetes-friendly tools will still be at your disposal, and then you have this great layer of abstraction that you can surface up to your application developers — sorry, not infrastructure operators, just your application developers.

So how does the implementation architecture work? We have the application model itself, which is the interface where application developers and operators specify the necessary components, traits, and scopes. From there, the implementation translates: say you have an autoscaling trait — the implementation goes and decides how it's going to be executed. Then the infrastructure essentially determines what platform features are available and how it will be executed on the hardware and the orchestrator. Here, the orchestrator would be Kubernetes, and the platform features are all the features that would be supported in Kubernetes today.
A

Hey, I have a question — someone submitted two questions, actually. They said: we use Weaveworks Flux and have written a lot of our own operators and CRDs. Is there a way to adopt OAM as a standard way of defining operational and security semantics for our components and workloads without adopting something like Rudr?
B

You could. Another way you can go about doing this is using Rudr for the core traits; but if you have existing operators and CRDs — and this is actually a good segue — we're working on trying to make the model more extensible to existing resources that you have today. So instead of working only within that set of core components, traits, and scopes, we'd hope that you could model those existing CRDs with extension points, which lends itself really nicely to this next talking point: OAM extensibility.
B

You heard me talking a lot about core constructs earlier on; these must be supported by OAM-compliant implementations. Then there are extended workload types, or extension traits and scopes; these are optionally supported by OAM-compliant implementations. This could be an example of where you have your existing CRDs that you want to plug in.
B

How can we set up the application model to consume these in a way that's pluggable, so you can still model them with the Open Application Model using the same kind of YAML, with components, traits, and scopes? So yeah, these are our set of core constructs: as I mentioned earlier, you have those six core component workload types; you have two core scope types at this point, our network scope and our health scope; and we have one existing core trait type, which is the manual scaler.
B

A good example of an extended trait might be an autoscaler. At first glance you might think this seems fairly basic and you would want to include it in your set of core trait types, but trying to model all the different ways people might want to autoscale is very difficult, and it's difficult to capture it all within one long list of parameters.
B

So here's an example of a mix of core constructs and extension constructs. We have our component, which has a workload type of server, and we're having traffic flow in via our ingress trait. I described an ingress trait a little bit earlier; it's actually not a core trait but an extension trait that is implemented in Rudr. That means if you want to go and use, let's say, Alibaba's EDAS implementation, it isn't guaranteed to be supported there.
B

So these are two examples of extension points: these resources aren't necessarily implemented by all the different implementations, but they're still useful, and I want my developers — or my application operators — to be describing them in the same manner that they're describing all the other aspects of their application, with those component schematics, putting them together in an application configuration.
B

So, up and coming: as I kind of mentioned earlier, OAM is not well positioned for infrastructure operators who are interested in implementing extended workload types or extension traits and scopes. In our second draft we're focusing on making OAM more flexible for infrastructure operators by including existing resources into the OAM runtime. We want to make this easier for folks, and find a way to also make sure that we can share these extended workload types.
A

B
There is no source-to-image construct right now, but we're looking at ways of incorporating technologies like Knative Build — I guess it's now known as Tekton Pipelines — and ways of describing those with OAM, so that you could leverage existing solutions that would help you take source to image. But right now it's not a first-class citizen in the Open Application Model.
A
B
All right — lastly, community. If you're interested in getting involved, there are a couple of different ways. We have a Gitter channel, which is linked above. We have bi-weekly community calls; the next one is coming up on 2/25 at 10:30 a.m. PST, where we'll be talking a lot about these upcoming changes that I mentioned earlier.
B

We have a community calendar that you can subscribe to. We will be at KubeCon at the end of March, beginning of April; the talks are not on the calendar yet, but we have a couple, so stay tuned if you want to hear more from us. And lastly, contributions: across our repos we have the specification repo itself as well as Rudr, which is our open source Kubernetes implementation, and — I didn't link it here, but — we have a samples repo.
B

So if you are enjoying Rudr, or have tried it out and have a cool application that you've modeled with it, we'd love to have more contributions there, so we can have different reference architectures to point to. Those are all great ways to get involved if you're interested, and with that, I'll open it up to any additional questions anyone might have.
B

Yeah — so for the next release, again, we're going to be focusing on making OAM more flexible for infrastructure operators, allowing them to include existing resources; we're trying to make that extended workload type and extension traits and scopes story stronger. If you want more details beyond that, I'd really suggest coming to our community meeting; at that point we'll be talking to that and some more specifics.