From YouTube: Introducing Korifi: The Evolution of CF on Kubernetes - Dr. Dave Walter & Andrew Wittrock, VMware
Description
Introducing Korifi: The Evolution of CF on Kubernetes - Dr. Dave Walter & Andrew Wittrock, VMware
The CF on K8s working group has decided to take a new approach to bringing the Cloud Foundry API to Kubernetes with Korifi. Korifi is a ground-up reimplementation of the Cloud Foundry API contract, backed by Kubernetes Custom Resources, Controllers, and Webhooks. In this session, we will dive into Korifi’s Custom Resources and how they simplify application deployment on Kubernetes - including a live demo of the platform in action. We’ll walk through how they fit together and how we leverage existing Kubernetes ecosystem projects such as Contour and Kpack to provide a fast, secure, and extensible platform.
Good morning, everyone. Welcome to Introducing Korifi: The Evolution of Cloud Foundry on Kubernetes. My name is Dave Walter, and beside me is Andrew Wittrock. We're both software engineers at VMware.
So you may be wondering: what is Korifi? Well, Korifi is a Greek word meaning peak or summit.
This in turn means using defined interfaces, which has the added benefit of providing modularity for users. For example, using the Gateway API will allow users an alternative networking solution, such as Istio, instead of using Contour. Where a suitable interface did not already exist, we've created our own: as you'll see later, the AppWorkload, BuildWorkload, and TaskWorkload CRDs that we've created allow users to bring their own build and run solutions while retaining the core implementation of the CF API.
So how have we gone about implementing these lofty goals? Unlike a CF deployment, Korifi does not include a relational database. Instead, it uses the state stored in custom resources on the cluster to keep track of both the desired configuration and the actual state of the resources.
Korifi uses pod selector labels to retrieve staging and running logs, aggregating them into a single source for the CF API. By default this is best-effort: out of the box, logging sidecars are not included, but if a platform wishes to persist logs, logging solutions such as Fluentd, Splunk, or Elastic can be deployed alongside Kubernetes.
The first one is the CFApp, which gets created as part of the cf push workflow. This is used to store information such as the application's name and desired state. It also drives the creation of a secret to store user-provided environment variables. The status of the resource surfaces information about the state of the application, such as whether it's finished staging.
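To make the shape of this concrete, a CFApp custom resource might look roughly like the following (an illustrative sketch; the exact API group, version, and field names in Korifi may differ):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: CFApp
metadata:
  name: my-app-guid
  namespace: cf-space
spec:
  displayName: spring-music      # the application's name
  desiredState: STOPPED          # e.g. STOPPED or STARTED
  envSecretName: my-app-env      # secret holding user-provided env vars
status:
  conditions:
  - type: Staged                 # surfaces whether staging has finished
    status: "True"
```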
Next in the cf push workflow is uploading the source package. These are stored in the image registry as a single-layer image. The CFPackage resource is created to associate this image with the application and to provide a GUID for the CF API.
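A hedged sketch of the CFPackage tying a source image to its app (field names are illustrative, not the exact Korifi schema):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: CFPackage
metadata:
  name: package-guid             # the GUID surfaced through the CF API
  namespace: cf-space
spec:
  type: bits                     # uploaded source bits
  appRef:
    name: my-app-guid            # associates the package with its CFApp
  source:
    registry:
      image: registry.local/source/package-guid  # single-layer source image
```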
There we go. No, I went too far. The CFBuild is reconciled into a BuildWorkload resource, which is the first of our interfaces: integration points where users can bring their own builder instead of using the built-in kpack builder in Korifi. This includes all of the information required to allow an application to be staged, and it should provide feedback on the build and the resulting droplet via its status.
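As a sketch, a BuildWorkload handed to a pluggable builder might carry roughly this information (names are illustrative assumptions):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: BuildWorkload
metadata:
  name: build-guid
  namespace: cf-space
spec:
  source:
    registry:
      image: registry.local/source/package-guid  # source package to stage
  buildpacks: []                 # optional buildpack selection
  env: []                        # staging environment variables
status:
  droplet:
    registry:
      image: registry.local/droplets/build-guid  # resulting droplet image
  conditions:
  - type: Succeeded              # feedback on the build for the CF API
    status: "True"
```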
Once the application's finished staging, a CFProcess resource is created for each process referenced in the application manifest. This is where user-provided information for things like the desired number of instances, memory and disk quotas, and health checks is combined with information about the staged application.
Each CFProcess resource is reconciled into an AppWorkload resource, which is the next of our integration touch points. This allows users to bring their own runtime instead of using a StatefulSet, which is the default built into Korifi. It includes all of the information required to allow an application to be run, and it should provide feedback on the status of the running application.
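An AppWorkload handed to a pluggable runtime might look roughly like this (an illustrative sketch; field names are assumptions):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: AppWorkload
metadata:
  name: process-guid
  namespace: cf-space
spec:
  image: registry.local/droplets/build-guid  # the staged droplet image
  command: ["/cnb/process/web"]  # process start command
  instances: 2                   # desired number of instances
  resources:
    limits:
      memory: 1Gi                # memory quota from the manifest
      ephemeral-storage: 1Gi     # disk quota from the manifest
status:
  readyReplicas: 2               # feedback on the running application
```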
CF tasks include the command the user specified for that task, which is then executed using the application's staged droplet image. The CFTask reconciles into a TaskWorkload, which is the last integration touch point, and again this has all of the information required for someone to implement a task runner instead of using the default that's built into Korifi.
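A TaskWorkload for such a task runner might carry roughly this (illustrative sketch; the command shown is a hypothetical example):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: TaskWorkload
metadata:
  name: task-guid
  namespace: cf-space
spec:
  image: registry.local/droplets/build-guid    # the app's staged droplet image
  command: ["bin/rails", "db:migrate"]         # the user-specified task command
status:
  conditions:
  - type: TaskCompleted          # feedback on task execution
    status: "True"
```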
For routing we have two resources: the CFDomain and the CFRoute. With the CFDomain we've barely scratched the surface; all that we have here is shared domains, so the CFDomain is a very simple resource that just has the name of the domain that you wish to set up. We haven't yet implemented logic to protect from overlapping domains, hostname squatting, and that sort of thing.
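The routing pair might look roughly like this (an illustrative sketch; shared domains only, as described above):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: CFDomain
metadata:
  name: example-domain
spec:
  name: apps.example.com         # the shared domain name
---
apiVersion: korifi.cloudfoundry.org/v1alpha1
kind: CFRoute
metadata:
  name: my-route
  namespace: cf-space
spec:
  host: spring-music             # hostname under the domain
  domainRef:
    name: example-domain
  destinations:
  - appRef:
      name: my-app-guid          # route traffic to this app
```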
A
So
far
for
services,
we
only
support
user
provided
service
instances,
so
the
user
will
create
just
when
they
run
CF
cups.
That
will
create
a
CF
service
instance
that
will
include
the
name,
the
type
and
the
tags
of
the
of
the
instance.
The credentials will go into a secret that's referenced in this resource, and similarly for the service binding that binds the instance to the application. Both of these conform to the Kubernetes service binding spec by including a reference to the secret in the status; that way we're conformant, and we can use implementations of that spec at runtime.
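Sketched out, the service pair and its secret references might look like this (field names illustrative; the secret reference in the binding's status is the service binding spec conformance point mentioned above):

```yaml
apiVersion: korifi.cloudfoundry.org/v1alpha1   # assumed group/version
kind: CFServiceInstance
metadata:
  name: my-db
  namespace: cf-space
spec:
  displayName: my-db
  type: user-provided            # created via `cf cups`
  tags: ["sql"]
  secretName: my-db-credentials  # user-supplied credentials secret
---
apiVersion: korifi.cloudfoundry.org/v1alpha1
kind: CFServiceBinding
metadata:
  name: my-db-binding
  namespace: cf-space
spec:
  appRef:
    name: my-app-guid            # bind the instance to this app
  service:
    name: my-db
status:
  binding:
    name: my-db-credentials      # secret reference per the service binding spec
```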
Lastly, we have the dependencies, the core resources that Korifi relies on by default. It uses kpack for staging, plus the Kubernetes service binding spec that I mentioned. We have a very simple StatefulSet runner for the runtime; the reason we chose that was because Cloud Foundry applications expect an instance index, and StatefulSets give us that predictable number for each instance of the application. For ingress we integrated with Contour. We have plans to use the Gateway API, but that's not quite ready.
...to Helm, and that was generally because we felt the experience was a little bit better. So if you install now, you'll be using Helm. It's a standard Helm chart that provides a simple interface for users to provide all the configuration options. Of note, all sensitive values are provided via references to user-managed secrets.
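For instance, a hypothetical values fragment for the chart might look like this (key names are illustrative, not the chart's exact schema; note the registry credentials are referenced by secret name rather than inlined):

```yaml
rootNamespace: cf
adminUserName: cf-admin
api:
  apiServer:
    url: api.korifi.example.org
containerRegistrySecret: image-registry-credentials  # reference to a user-managed secret
```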
So you don't have to worry about any of that being stored in strange places. And next up... I hope people like demos. Oops.
Let me go back. So, demos are always entertaining, because if they work it's great, and if they don't work, well, we'll find out. Hopefully this can be... I can't actually see the screen too well, but yeah, cool, cool, all right. So if I can look at this... cool, looks good. So that's the demo script, right? Cool, all right. So first we're going to deploy Korifi. I've already set up all the dependencies for a kind cluster, so it's a local cluster with a local registry; I didn't really feel like dealing with too many network issues here. So this should deploy Korifi on the right pane.
We will eventually see some resources popping up as we push the various bits, but right now it's just kind of saying "I haven't found anything." So, as that works, let me switch down here.
Okay, so it looks like I can't read this. Dave, do you want to help me? Okay, cool. So we have Korifi deployed, and it looks like we do. You can see on the lower pane there are some various resources in the korifi namespace; those are all of our controllers and everything. So I should be able to log in, and of note, right here you'll see two different choices for us. One of them is a Kubernetes admin, so that's literally your cluster admin. We don't really want that.
So if we do that, we can then see... I believe we're targeted, all right. And then I think it says "deploying Spring Music." Is that what it says? Okay, so we're going to push Spring Music, and we're going to hope for the best here because of the network. As we push that, we should be able to see resources popping up on the right; those are the various Korifi resources, as well as some build images from kpack and whatnot. And as that pushes, we can probably look at some resources down here.
So we've got our pods, I think. Yep, so we've got a build pod going. This is kpack, so it's using buildpacks to build the image. You'll see we have an app.
We have a BuildWorkload, and that should have some various information on it. If we go down, there's the spec, I believe, so you can see various status info and all that.
And it looks like kpack is doing its thing, I believe. It might take a few minutes, because, I don't know if anyone's looked at Java apps recently, but when you have a JDK and a JRE and Maven or Gradle and everything else, it takes a little bit of time. But so far so good.
If there are any resources anyone really desires to look at right now while we wait for this, I can look at them. I don't know, put your hands up. Or, Dave, what do you think? This might take a few...
You know, at least there's some consistency to the grooving. We can see in the streaming logs that it's still going through its build process.
It's probably some status down here. Yep. No, my tmux skills are not really the best, so advanced zooming in on single frames is probably not going to be part of the demo today.
I'm going to step down so I can see for a second. Still compiling, fun times. It's going, we're seeing more stuff, so we'll get there. Apologies for all the slowness of the building. 2020? Yeah, yeah, but...
I haven't noticed it to be super slow, but most of it is usually network for me, because you've got to pull down the various things; like, for here, you know, the JDK, the JRE, any of the other build tooling. That's usually the bottleneck in my observations, but I don't know. What have I... yeah.
I assume that's possible, and I haven't... oh, sorry. The question was: could you load the JDK and the JRE off the file system? I assume that would be possible; I have not looked into it.
Well, we can keep waiting for it, or I can retry it. But theoretically, what should have happened is this application would come up, and then I would have been able to show you actually binding a service to it. But that relies on the application actually working.
I don't know which one that is. Yeah, let's look at some logs.
Yep... okay. So let's take some questions in the meantime, while I try to get this thing working again. Apologies for trying to do a live demo, but any questions right now that people have?
If you can get around not using any indexes, you can use Deployments or something else, but we decided to stick as close as we could to the Cloud Foundry concepts right now, to make sure everything works as people expect it to. Cool.
Right now we don't have a lot on the roadmap for that. I would assume we're going to try to support most of the things that are in the Cloud Foundry ecosystem. We have a pretty limited implementation for service-broker-type stuff right now, so hopefully that's to come, and if anyone would like to contribute, then please help.
Like an air-gapped environment, something like that? Right now, obviously, that's not something that we support. You would need to solve those problems that we've seen earlier; like, you would need to be able to load all your build tooling in from a local machine rather than reaching out. So it would probably require a bit more implementation, especially around any buildpack stuff. But beyond that, I don't think it'd be too difficult.
The buildpacks fully support an offline deployment, so you can basically rebuild the buildpack images with all of the dependencies baked into the image itself. Then, when the kpack cluster goes to actually run the build, it just copies things like the JDK or the JRE directly into your application image, rather than downloading them over and over again from the internet. So all of that on the kpack side is fully supported; offline builds are completely possible, you just have to include the correct buildpacks when you install your cluster.
So I think we're definitely hitting the limits of network stuff, because the previous build finally completed and uploaded, but now I'm outside of the scope of the demo script.
Just any other questions?
The next one's coming in, so... I've got a "no healthy upstream," fun. Okay, well, you know, this is a demonstration of what can absolutely go wrong in a live demo. Don't be me in the future, because it feels great, let me tell you. Yeah, so...
I can actually record a screen-cap demo for everyone and then send it over to Chris, and we can get it uploaded for people. Because when we have a new one... we have two minutes? This is perfect. Wait, and then... so what are you... is this seriously? Okay.
Super quickly; I can't type. And then, hopefully, if y'all want to, you can stay through the extra few minutes. Wow, this is some advanced stuff.
This album? I don't know; I asked for an album from a teammate and they said to do this one, so it should be somewhere in here.
That might be... oh. Let me switch to that pane and show you that the logs actually work as well. Hopefully... I can... I can't... what did I...
There we go. So logs work, yay. Anyway, sorry for all those delays and things; you know, live demo. There it is; it does connect to a database. You can give it a user-provided service, you can bind that, you can do all sorts of fun stuff. It does the sorts of things you expect it to, and we'll continue developing features. Anyone that wants to come help us, please do. Yeah, always looking for help. Yeah, well, and my laptop really loves me, so I think I need to go back to the...
Yeah, the only other thing we had slide-wise was just a roadmap. Like I say, upcoming: we want to use the Gateway API once it's reached the right level of operational maturity, and then just expanding the CF API endpoints that we've implemented.
Yeah, there weren't too many slides, and I kind of screwed it up by switching back and forth between stuff. So yes, any other questions? We're over time, by the way, but let's...
Yeah, and what about rolling updates at all? Like updates on Cloud Foundry?
Updates... so, like you've noticed, it's not quite there yet, because you saw the "no healthy upstream." We still need to implement some of that to make sure that we have full uptime.
But, I mean, the one thing here was this deployed one instance of everything, so when the instance goes down, obviously everything's bad. But you can scale up: you can scale to multiple instances of the API pods, scale up the controllers with leader election (so they could be scaled up), and scale the applications up, obviously. And then there will be configuration to tell Kubernetes to roll the deployment one at a time, or pieces at a time, to avoid everything being down and that being actual downtime during the upgrade.
Yeah, that's definitely a good point, but for the single-instance example, we have not implemented anything special to get around that. Yeah.