Description
Knative promises to “abstract away the complex details (of Kubernetes) and enable developers to focus on what matters”. While Kubernetes has redefined the base level infrastructure expectations at many organizations, developers often struggle to understand and leverage the features available to them. Mark Wang and Evan Anderson will explain how S&P has used Knative to build a serverless developer experience. Along the way, you’ll learn about how Knative leverages Kubernetes to build a serverless platform on cloud-independent infrastructure.
Presenters:
Evan Anderson, Software Engineer @VMware
Mark Wang, Head of Cloud Engineering @S&P Global Ratings
A
Okay, okay. So, can you hear me? I've got some noise here. Okay, cool. All right, we're going to get started now, and I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, the S&P experience report on multi-cloud serverless on Knative. My name is Danielle, and I'm working for Red Hat as a technical marketing manager. I will be moderating today's webinar, so we'd like to welcome our awesome presenters: Evan Anderson, software engineer at VMware, and Mark Wang, head of cloud engineering at S&P Global Ratings. A couple of housekeeping items before we get started: during the webinar you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we will get to as many as we can at the end.
B
Hi. As I was introduced, I'm Evan Anderson from VMware. I'm one of the TOC members on the Knative project, and I've been working on it for the last couple of years. We recently had an opportunity to work with Mark at S&P Global on actually rolling out an implementation of Knative. And so I'm going to let Mark introduce himself a little bit.
C
Thank you, Danielle and Evan. Good afternoon and good evening, everyone. My name is Mark Wang, and I have the privilege of running cloud engineering at S&P Global Ratings. As Evan introduced, we have an existing relationship with VMware, and once we selected Knative as the technology for our multi-cloud serverless implementation, we started working with Evan and the team. It's been a great collaboration, and our goal today is just to share our experience on this journey.
B
Oh, I was just going to say that when we started this a couple of years ago, it was kind of a dream that, hey, all the pieces were coming into place so that you could actually run a serverless solution on your own infrastructure without having to build all of the pieces yourself.
B
So
you
didn't
need
to
build
your
own
scheduler
kubernetes
had
one
of
those
you
didn't
need
to
build
your
own
http
routing
envoy
had
that,
and
so
it's
exciting
to
see
companies
like
s
p,
actually
taking
that
last
step
of
not
just
having
the
software
but
actually
having
a
platform
that
they
can
build
and
customize
to
their
own
needs,
and
it's
been
great
working
with
them
to
be
able
to
get
that
to
happen,
and
with
that
I'm
going
to.
Let
mark
talk
a
little
bit
about
how
things.
B
Actually
you
know
how
s
p
actually
decided
to
do
a
serverless
platform.
C
Right. So when you think of serverless technology, some basic things come to mind. One is that you don't have to manage infrastructure; another is speed to market; and cost is another, because of compute density: you're not using whole machines, you're packing everything into containers and even functions. So why did we go to function as a service? To me, the biggest benefit is that we can ship software faster. I'll go through more of our background in terms of our technology portfolio.
C
But
if
we
look
at
the
overall
goal
right,
so
why
do
we
want
to
adopt
fast
function
as
a
serverless
function
as
a
service
or
serverless
technology
right
it's
to
ship,
our
products
faster,
so
we
want
to
move
from
monthly,
quarterly
releases
to
weekly
or
daily
releases
right
so
release
frequently.
So
that's
one
of
the
benefits
from
serverless
technology,
which
is
that
teams,
scrum
teams
can
focus
on
the
business
logic
and
not
worry
about
infrastructure
worry
about
coding.
Standards
worry
about
vulnerabilities.
C
A
lot
of
those
will
be
taken.
Care
of
I'll
show
a
little
bit
more
about
that,
and
then
the
other
thing
is
that
they
can
own
more
of
their
staff
right,
so
they
don't
have
to
depend
on
this
team
or
that
team.
So
the
automation
and
the
pipeline
would
take
care
of
a
lot
of
that.
So
they
have
true
cicd
experience
right.
C
Then
we
end
up
retiring
our
legacy,
monolithic
applications,
so
less
technology
there
to
worry
about,
and
one
of
the
things
in
our
journey
that
we
found
is
that
as
we
moved
along,
you
know,
k-native
is
a
new
technology.
Even
kubernetes
is
a
new
technology
for
us.
So
for
us
to
have
this
global
fads
initiative,
we
really
needed
to
transform
our
culture.
C
So
one
of
the
things
we
notice
as
we
moved
along
is
that
we
partner
with
application
developers
because
the
subject
matter,
expertise
for
python4.net
for
angular
and
for
other
technologies
resides
with
application
teams
right.
So
these
we
have
a
open
kind
of
development,
open
collaboration
model
where
the
engineers
can
contribute
to
the
platform.
I'll
show
you
more
detail
about
that
and
then
when
they
contribute
they
kind
of
unblock
themselves
as
well.
It's
almost
like
a
kind
of
win-win
situation.
C
So
you
can
go
to
the
next
slide.
Okay.
So
this
is
a
background.
So
2020
one
of
our
five
strategic
initiatives
is
to
be
certified
on
as
a
service.
So
if
you
look
at
our
portfolio
two
years
ago,
where
majority
is
on
prem
and
then
last
year,
over
a
period
of
nine
months
or
eight
months,
we
migrated
100
percent
to
the
cloud
that
migration
was
kind
of
a
learning
experience
and
the
culture
change
in
itself.
C
If
you
think
about
our
company
right
smp
is
a
financial
trusted
financial
institution
in
fine
in
in
the
world
right
globally,
so
we've
been
around
for
160
years
and
today
we're
moving
at
the
speed
of
fintech.
So
last
year,
2019
by
mid
2019
over
a
period
of
a
month.
We
have
migrated
100
to
the
cloud
and
today
we're
benefiting
from
that
right
with
covet
with
remote
working.
We
enjoy
stability
and
freedom
in
the
cloud
right.
So
what
is
our
next
leap?
C
So
the
team
gained
a
lot
of
confidence
from
this
experience
to
move
to
the
cloud
and
just
a
little
bit
more
background
on
that
right.
So,
while
we're
doing
the
move,
migration,
mass
migration
to
the
cloud,
we
went
through
insourcing
as
well
as
agile
transformation.
So
there's
a
lot
going
on
culturally,
but
with
clear
leadership
and
dedication
of
our
teams,
as
well
as
ingenuity
from
our
engineers.
So
a
lot
of
people
don't
know
the
cloud
at
that
time.
Right
when
we
started
the
journey
on
the
project,
there
was
just
one
and
a
half
cloud
engineers.
C
I
count
myself
as
half
the
engineer,
so
we
slowly
built
up
the
team
internally.
We
didn't
really
get
much
help
from
outside
other
than
we
partnered
with
vmware
to
use
a
vmc
solution
for
some
of
the
vm
workloads.
So
we
took
so
after
that
experience.
We
gained
a
lot
of
confidence
and
then
going
to
2020.
We
said,
let's
take
a
leap
and
go
all
the
way
to
function
as
a
service.
So
if
we
go
to
the
next
slide,
I'll
show
you
a
little
bit
of
that
journey
in
terms
of
the
road
map
right.
C
So
we
did
a
market
research
on
what
are
our
options
to
go
to
function?
This
service
lambda
is
the
first
one
comes
to
mind
right
and
then
azure
functions,
so
we
evaluated
those
as
well
as
other
open
source
solutions.
Besides
k-nid
and
then
we
came
up,
came
to
the
conclusion
that
k-native
really
gives
us
the
cloud
independence
and
the
multi-cloud
capability
that
we
want
and
there's
a
lot
of
smart
engineers
like
evan,
that
you
know
back
up
the
active
community
behind
k
native
so
towards
the
beginning
of
this
year.
We
started
experimenting,
k
natives.
C
So
if
you
look
at
the
first
swim
lane,
so
we
experimented
with
k
native
and
then
by
end
of
q1.
In
partnership
with
evan
and
team,
we
were
able
to
release
our
first
version
of
function
as
a
service
platform,
and
then
we
were
able
to
release
a
handful
of
applications
to
the
cloud
to
to
production
and
then
the
other.
The
third
swim
lane
is
really
around
adoption
right.
So,
regarding
that
benefit,
we
talked
about
earlier
that
goal
of
shift
shipping
product
faster
right
so
to
for
us
to
be
able
to
ship
product
faster.
C
C
It's
really
to
make
sure
teams
know
how
to
do
function
as
a
service
using
k-native,
and
then
they
understand
how
to
scale
up
and
scale
down.
So
we
have
certification
meetings,
we
make
sure
the
teams
are
certified
and
then
they
understand
what
is
their
roadmap?
How
can
they
break
capabilities
out
of
the
monolith
to
put
into
serverless?
C
So
if
you
think
about
it,
silver
really
is
to
break
out
more
high
value
capabilities
out
of
the
monolith
and
put
it
on
to
k
native
right
and
then,
when
we
get
to
gold,
we
will
have
a
consolidated
portfolio
of
capabilities
and
we
will
also
retire
a
bunch
of
duplications
and
monolithic
applications.
At
that
time,
when
we
reach
gold
will
be
completely
on
function
of
the
service
using
k
native
and
let's
go
on
to
the
next
slide.
C
So
this
is
kind
of
a
release
view
of
the
platform
itself,
so
we
majority
of
the
responsibility
of
my
team
is
to
enable
the
application
teams
and
strum
teams
to
be
able
to
onboard
their
use
cases.
Like
I
talked
about
earlier.
We
really
opened
it
up.
We
had
nine
work
streams,
engineers
from
different
areas
or
were
helping,
collaborating
and
moving
this
thing
along
so
every
month
we'll
have
a
platform
release,
so
mbp
one
was
that
we
have
a
base
platform
right,
so
we
have
kubernetes.
C
We
have
k
native,
we
have
istio,
we
have
azure
devops
for
ci
cd
and
then
we
have
a
set
of
pilot
applications,
so
that
was
able
to
go.
Live
and
then
we
also
defined
this
model
to
collaborate
with
engineers
from
different
groups
and
then
fast
forward
to
2.0,
so
2.0
was
released
in
q2.
At
that
time
we
had
24x7
support
and
then
we
have
introduced
oidc
as
a
standard
for
authentication.
C
We
have
logging
monitoring,
onboarding
automation
as
well
I'll
show
you
some
of
that
in
the
demo,
and
we
have
ci
cd
pipeline
automation.
So
at
this
time
we
open
it
up
by
2.0
q2.
We
open
it
up
for
mass
adoption
and
we
introduce
the
reference
implementations
I'll
show
you
a
little
bit
of
that
in
the
demo
as
well
and
in
q3
we
are
going
through
more
maturity.
C
So
now
we
have
k
native
function
as
a
service
running
in
dmz,
so
we
have
external
phasing
applications,
that's
using
this
capability
now
and
we
have
distributed
tracing
and
we're
using
spot
instance
for
our
all
of
our
clusters,
and
we
also
introduce
containers
right
container
as
a
service,
because
not
everything
can
go
to
k-native
right.
We
have
commercial
off
the
shelf
products
that
we
don't
manage
the
code
for,
and
we
also
introduce
container
security
for
us
to
go
to
dmz.
We
want
to
make
sure
our
images
our
runtime
is
secure.
C
All
right
so
some
highlights
on
the
features,
so
we
support
currently
these
languages
for
our
developers,
java.net,
python
and
angular,
and
then
for
eventing.
We
support
kafka
as
well
as
activemq
and
for
security.
We
have
twist
lock.
We
have
open
policy
agent
and
fortify
is
for
our
static
code
analysis
and
then
on
observability
side.
We
have
logging.
C
So
one
of
the
things
is
the
developers
for
productivity
right.
They
don't
have
to
worry
about
logging
into
anything
right,
there's,
obviously
no
machines,
they
don't
have
to
care
about
where
the
container
runs.
They
have
full
visibility
with
logging
and
monitoring
and
we
added
distributed
tracing
as
well
with
the
open,
telemetry
and,
like
I
said,
we're
using
spot
instances.
I
think
one
thing
I
want
to
highlight
here,
maybe
in
a
later
slide
as
we
do
blue-green
cluster
automation,
so
we'll
go
to
the
next
slide
and
talk
about
that
yeah.
C
So
some
of
the
major
components
right
is
we're
using
eks
currently
in
amazon,
and
this
technology
is
completely
portable
on
other
cloud
providers
as
well
right,
we're
currently
using
alibaba
and
as
well
as
we're
going
to
start
using
azure.
So
because
it's
built
on
kubernetes
and
because
k
native
is
built
on
kubernetes,
all
these
will
be
portable
as
well
and
we're
using
a
lot
of
automation.
Right,
like
I
talked
about
blue
green,
so
every
month,
when
we
release,
we
will
build
a
whole
new
color
cluster
right
from
dev
all
the
way
onward.
C
The
reason
why
we
do
that
is,
we
want
to
make
sure
everything
is
automated,
and
we
want
to
also
make
sure
we
respond
to
the
changes
in
the
base.
Bay,
underlying
k,
native
and
istio
features
very
quickly
right,
because
the
technology
is
changing
very
quickly.
We
want
to
be
up
up
to
date
with
the
underlying
technology
and
yeah.
We
can
go
to
the
next
slide.
C
Yeah,
so
this
is
a
kind
of
a
quick
view
for
our
developer
experience.
So
we
have
local
environment
capabilities.
So
if
developer
wants
to
set
up
local
developer
experience
with
building
the
image
locally
and
running
the
image
locally,
they
can
do
that
and
we
also
have
pipeline
with
standards
built
built
in
and
automation
built
in
as
well.
So
once
they
go
through
the
pipeline,
let's
say
they
deploy
the
code
to
dev
then
immediately.
C
That
is,
the
image
is
shipped
to
our
artifactory
image
repository
and
then
it's
put
into
kubernetes
through
k,
native
and
accesses
through
istio.
So
why
don't
we
jump
to
the
demo?
So
I'll?
Give
you
a
little
look
at
this?
C
We have a getting-started guide that scrum teams can come in and use to get onboarded and help themselves with CI/CD and the secret manager, and then we set up a meeting with them; in 30 minutes they can get going. We've cut down our onboarding time: when we started with MVP 1, it took about a week for a team to onboard a function, and now it takes about two hours.
C
No, no, I'll make it quick; it's going to be much less than two hours. It starts with the documentation and the onboarding automation I talked about, in Jenkins, based on the different types of reference implementation.
C
So,
for
example,
if
you're
a
java
application
we'll
create
this
template,
so
we'll
fill
in
all
the
details
that
you
need
and
then
once
you
run
it
we're
going
to
build
the
reference
implementation
so
think
about
the
reference
implementation
as
the
best
practices
and
the
standards
built
in
so
once
we
run
through
this
pipeline,
it's
going
to
create
that
reference
implementation
into
a
template
for
you,
and
I
executed
this
form
for
our
demo
here.
So,
for
example,
I
have
a
demo
repository
called
demo
mark
rest.
C
So
it's
a
restful
function
and
it's
a
java
function
right.
So
we
have
code
reviewers.
We
have
other
team
members
and
we
have
secret
manager
defined
here.
So
after
we
run
this,
what
it
will
do
is
it's
going
to
create
a
reference
implementation
here.
So
if
you
look
at
this,
this
comes
sort
of
out
of
the
box
right.
So
you
have
a
reference
implementation.
C
You
know
all
the
monitoring,
the
log
and
the
documentation
comes
out
of
the
box
as
well.
So
all
you
have
to
do,
then,
is
to
cut
and
paste
your
code
into
it.
I'll
go
through
a
couple
of
components
of
this
right.
So
since
this
is
a
java
function,
we
have
a
pom
file,
so
palm
file
defines
your
dependencies
for
java
right
and
then
we
have
a
docker
file.
C
So
this
pipeline
file
is
important
because
we
use
ado
azure
devops,
and
this
defines
what
goes
into
the
pipeline
so
for
this
pipeline,
for
example,
I
turned
off
the
fortify
right
because
we
don't
want
to
run
this
for
like
10
minutes,
because
fortify
is
going
to
scan
the
code,
and
I
turned
off
couple
other
things
in
here
as
well.
For
example,
you
can
all
these
things
are
configurable,
so
you
can
turn
these
on
and
off.
C
So,
for
example,
I
turned
off
some
of
the
qa
automation,
so
you
can
have
ci
cd
and
ct
built
into
this
right
and,
as
part
of
certification,
would
make
sure
that
you
have
code
coverage.
You
have
continuous
testing
as
well
as
well.
As
you
know,
sanity
test
smoke
test
regression
testing
right.
So
these
are
all
configurable.
You
can
turn
on
and
off
for
this
demo.
I
turned
on
the
observability,
so
I
want
to
kind
of
show
you
that
and
the
other
important
file
is
service
yaml.
C
So
this
is
the
k
native
file,
so
in
here
developers
have
full
control
over
what
is
their
minimal
number
of
instances.
They
want
to
run
maximal
number
as
part
of
the
certification.
The
teams
have
to
demonstrate
that
they
understand
all
these
underlying
technology,
so
each
team
is
required
to
test
their
performance,
so
one
team
came
in
to
the
certification
and
they
load
tested.
They
performance
tested
their
current
usage
up
to
150
150
times
the
current
usage.
C
For example, they had about 20 concurrent users, so they tested up to 3,000 users. Their pods scaled up to 10, and then, when they realized that they didn't really need a maximum of 10 pods, they set it back to two, three, or four pods to support their typical load of 20. But they have full control over this. The other thing I want to show, going back to the pipeline, is that they have full control over here; I just stopped at QA.
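For reference, a Knative service.yaml with the kind of scale bounds Mark describes might look like the sketch below. This is illustrative only, not S&P's actual file; the service name and image are made up:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: demo-mark-rest              # hypothetical function name
spec:
  template:
    metadata:
      annotations:
        # Bounds for the Knative Pod Autoscaler. The team in the story
        # load tested with a ceiling of 10 pods, then lowered it to 4
        # once they saw their typical load of ~20 concurrent users.
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "4"
    spec:
      containers:
        - image: registry.example.com/demo-mark-rest:1.0.0   # placeholder image
```

In a setup like this, developers edit only these few fields, and the pipeline would typically apply the file during the deploy stage.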
C
They have control over deployment all the way to production, DR, DMZ, and so on. For demo purposes, these are the application config files that we can manage. I have this sample function: it's a ping-pong method, and then there's also a method to pull out some sample ratings.
C
Let's
see,
I
will
change
the
code
here
to
let's
say
cmcf
ping
pong
imagine
this
is
what
the
developers
would
go
through
to
change
the
code
or
even
put
the
code
in
the
first
time
and
as
part
of
the
the
pipeline
we
it's
gated
release
right.
We
want
to
make
sure
that
your
release
ties
to
a
story
right,
so
we
are
continuously
delivering
value
right.
So
then
they
will
commit
the
code
and
then
they'll
do
a
pull
request.
So
even
within
the
pull
request,
so
I
will
check
into
the
october
release
in
here.
C
We
are
sure
that
you
must
have
code
reviewers
right,
so
this
is
another
gate
that
you
must
go
through
and
I'm
just
gonna.
You
know,
for
I
don't
want
evan
or
no,
I
haven't
can
cancel
it.
I
don't
need
anybody
else
to
approve
the
code,
so
I'm
gonna
approve
it
myself
and
then
we'll
set
the
automation
to
build
the
pipeline
here.
C
So
that's
going
to
kick
off
the
pipelines
there.
So
we
can
go
in
and
take
a
look
at
this,
so
the
pipeline.
What
it
does
is
it's
gonna
start
with
a
snapshot
build,
so
you
can
see
all
the
outputs
of
these.
So
we
do
a
snapshot.
Build
first
is
to
ensure
that
the
container
image
will
actually
build
right.
The
java
code
will
build
because
if
that
fails,
there's
no
point
deploying
anything
into
it
to
canada,
so
we'll
just
fail
it
right
away.
So
here
we're
doing
a
snapshot
build.
So
it's
gonna.
C
Do
a
maven
release.
It's
gonna
do
a
a
docker
build
as
well.
B
C
That's right. If you see this parameter in the template: all of these are part of that initial automation where we create the reference implementation. This one is for Java, and all of that comes out of the box. You can see it's all based on a template, so it's GitOps: they use the template out of the box, and they can control all these different parameters within it for their pipeline.
C
Good
point
evan.
So
in
this
case
the
snapshot
build
is
successful
and
after
that
it's
going
to
run
a
build
to
deploy
to
dev
environment,
and
you
have
certain
controls
right.
This
is
one
I
did
earlier.
So,
for
example,
you
can
we
don't
need
to
deploy
to
qa
right,
so
we
just
we
can
just
reject
it.
So
qa
has
a
manual
step
where
somebody
has
to
approve
it.
You
know
they
have
to
make
sure
that
you
have
enough
code
coverage.
Qa
will
accept
that
build.
C
So
remember
we
have
this
configuration
here
that
for
dev
it's
gonna
do
very
simple
right
for
time's
sake,
we're
just
gonna,
do
a
build
and
we're
gonna
take
that
image
and
deploy
it
into
k
native
into
the
dev
environment,
oh
yeah,
and
then
here
here's
that
dev
environment,
so
I'm
gonna
watch
that
so
it
is
currently
running,
and
this
thing
is
gonna
show
us
that
it
will
deploy
the
new
one
right.
So
it's
this
is
running
currently
in
our
dev
environment.
B
C
That's right, good point. For the instrumentation, every function that we release is monitored out of the box, so you can have health monitoring of your function. Remember I talked about blue-green: this dashboard shows you what color we're on, because we don't want developers to be confused and go into the wrong cluster. Typically, once we deploy one cluster, we shut down the other clusters, so they won't get confused. The developers have full visibility into the health of the clusters and their functions.
C
You
can
have
you
can
look
at
different
name
spaces,
different
functions,
so
full
visibility
out
of
the
box.
You
don't
have
to
you,
know
care,
or
I
mean,
depending
on
how
much
you
want
to
care
right.
You.
If
you
just
worry
about
the
code,
you
don't
care
about
how
any
of
this
stuff
works.
So
we
simplify
it
for
you.
But
if
you
want
to
dig
into
the
log
right,
for
example,
this
is
that
function.
You
have
full
visibility
into
the
log.
C
If you have errors or fatal errors in the log, you will receive an alert, because in the onboarding, remember, you have to put in your contact information; we use that to set up an alert for you. And then these are more Knative-related alerts. If we go back to this, the build is done. What happens after the Docker build is that we publish the artifact into Artifactory, and that is used for the deployment into the dev environment.
C
That's
right,
that's
right!
So
the
same
so
remember
we
did
a
snapshot
where
the
snapshot
is
making
sure
that
your
build
is
clean
and
then
once
we
have
that,
if
we
remember
this
pipeline
file,
you
will
use
the
same
artifact.
You
won't
have
to
build
again.
You
use
the
same
artifact
and
deploy
it
all
the
way
through
that
that
would
be
your
gold
copy,
essentially
yeah
and
then
so.
This
is,
let's
see
how
this
is
doing.
C
This is kind of a high-level, cluster-level view, and then you can also view your functions and drill down to the health of each of your functions as well. I won't give you too much information here, so we'll go back to the deployment. Let's see, this usually takes about three minutes. It's deploying the service now; I think in here I put in a destroy command, yeah.
C
Let's see, if we look at the traces, we're at 1:33. Yes, so this is that ratings function; we're pulling back some sample ratings. You can see there's a bit of a cold start: the first request took one second, and then the subsequent one took milliseconds. Let's drill into it: in the first one, you can see the majority of the time is spent in Java.
C
This
is
your
select
statement
database,
which
is
pretty
quick,
but
then,
if
we
look
at
this
one
here,
this
is
your
crud
method,
pulling
back
all
the
data,
so
that
took
majority
of
time
for
one
second.
But
then,
if
we
look
at
the
other
one,
this
is
after
the
code
start
right.
So
you
look
at
this
one.
This
total
was
nine
milliseconds.
C
Oh, just one more slide, okay. So this technology is very new and exciting. A couple of takeaways: one is that with the open contribution model, teams are able to come in and contribute, and really get excited and get onboarded with this. We have 90 percent of the applications in scope for this year, meaning 90 percent of our application portfolio has either started or completed FaaS functions. And if we look at a team view, 50 percent of our teams have done FaaS functions already.
B
Okay, so we've talked a bunch about how S&P has built this FaaS platform and how they're using Knative, but what does that actually mean? Knative is a system for building serverless HTTP applications.
B
So
when
we
were
starting
the
project,
it
seemed
pretty
clear
to
us
building
an
open
serverless
system
that
we
wanted
to
build
on
some
pretty
robust
standards
and
http
kind
of
struck
us
as
the
obvious
way
to
get
requests
in
and
out,
because
it's
well
understood
and
it
keeps
evolving
and
improving.
So
it's
not
like.
We
picked
a
standard,
and
you
know
it's
going
to
be
the
same
three
years
or
five
years
from
now.
B
Similarly,
we
bet
on
kubernetes
for
container
scheduling,
because
we
knew
that
was
going
to
keep
improving
and
we
wanted
to
be
on
the
ocean
where
the
tide
was
rising
and
that
would
lift
our
ship
as
well,
but
we
also
wanted
to
specialize
kubernetes
more
than
more
than
kubernetes
itself
is
sort
of
here.
Here's
a
tool
you
can
do
anything
with
it
and
you're
like
I
can
do
anything,
but
you
most
of
the
time
you
don't
want
to
do
anything.
B
You
have
a
specific
thing
you
want
to
do,
and
so
we
wanted
to
make
it
a
sharp
specific
tool
for
cases
where
you
were
building
something
that
was
basically
a
12-factor
application
that
you
were
willing
to
speak
hdp
that
you
didn't
need
to
keep
local
state
and
that
we
could
make
things
a
lot
simpler
and
so
the
first
place
this
shows
up
is,
I
don't
know
how
many
of
you
define
kubernetes
deployments
on
a
regular
basis.
This
is
kind
of
the
smallest
simplest
deployment
you
could
have
over.
B
On
the
left
hand,
side
you've
got
a
deployment,
but
if
you
want
to
talk
to
it,
you
actually
also
need
a
service,
and
you
need
to
have
a
bunch
of
selector
labels
and
you
need
to
match
some
ports
and
stuff
like
that,
and
so
you
have
at
least
two
objects.
You've
got
to
keep
in
mind
and
you
have
to
think
about
labels.
B
With a Knative Service, you basically say: hey, this is a Knative Service (that's the apiVersion and kind), here's its name, run this container, and it speaks HTTP. Then there's a bunch of convention and a little bit of magic in there, and you get some services that you don't get from a standard Kubernetes Deployment, like an autoscaler. You could go and figure out a HorizontalPodAutoscaler, but look, we just added a little bit more YAML to the left-hand side. So we mentioned autoscaling.
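The small amount of YAML Evan is describing looks roughly like this: one complete (if minimal) Knative Service, replacing the Deployment plus Service (plus HPA, plus Ingress) combination. The name and image are placeholders:

```yaml
# One object instead of Deployment + Service (+ HPA + Ingress).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                        # placeholder name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest   # placeholder image
```

Routing, revision tracking, and request-based autoscaling are all filled in by convention.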
B
You
sort
of
get
this
to
some
extent
with
ingress
for
management
of
http
hostnames.
Again
we
just
added
another
object.
You
have
to
think
about
to
the
kubernetes
space,
so
all
those
things
are
useful
when
you
need
to
do
something
strange,
you
know
when
you
say:
hey,
I'm
running
a
bunch
of
game
servers.
Each
game
server
is
kind
of
independent,
but
they're
kind
of
together.
B
Kubernetes
is
a
great
fit
for
that.
But
if
you're
building
http
applications,
k-native
simplifies
that
we
also
build
in
some
tracking
of
previous
states
every
time
you
do
an
update,
it
creates
a
new
revision.
So
that's
a
little
bit
like
what
deployment
does,
but
it's
a
little
easier
to
find
them
and
go
back
to
the
earlier
ones
and
their
the
garbage
collection
policy
can
be
date-based
and
and
over
time
without
having
to
think
about.
You
know
how
many
replica
sets.
B
Am
I
going
to
keep
around
k
native
automates
a
lot
of
that
and
we
also
automated
a
bunch
of
the
roll
out
stuff
between
them.
Deployment
basically
has
one
has
a
policy.
That's
you
know,
hey
we'll
just
start
restarting
things
and
we'll
get
you
up
to
the
number
you
need
and
since
we're
serverless
and
we
sort
of
start
from
zero,
it's
easy
to
just
start
a
new
pool
and
scale
that
up
and
then
the
old
ones
go
away
when
they
need
to.
B
Also, since we knew we were doing HTTP, and we knew that it's 2020, we built in integration with cert-manager and Let's Encrypt, so that you can automatically get all your SSL handled without developers having to get involved, and you can have just a single wildcard cert that covers all of your functions in all of your domains. Let's talk a little bit now about how S&P actually ended up using this.
B
So
they
talked
about
having
blue
and
green
environments.
So
you
can
see
in
the
picture
down
at
the
bottom.
You
know
in
their
dev
environment,
they'd
have
cluster
one
and
cluster
two
and
one
would
be
blue
and
one
would
be
green,
and
so
they
have
a
specific
dns
zone,
fazz,
blue
or
faz,
or
faz
green.
That
lets
you
hit
a
specific
one
of
these.
You
know
specific
one
of
these
clusters.
B
They
also
have
a
top-level
dns
zone
for
the
environment,
so
dev
has
a
different
domain
than
uat
or
production,
and
so,
if
you
don't
want
to
have
to
think
about,
you
know
already
in
blue
or
are
we
in
green?
You
can
just
hit
that
top
level
thing.
If
you
need
to
go
into
details,
it's
there
and
they
integrated
with
aws
certificate
manager
to
do
the
provisioning
out
of
the
box
k,
k-native
ships
with
an
integration
with
let's
encrypt,
using
jetstack
cert
manager,
which
is
great.
B
If
you
have
an
internet
connected
cluster,
and
you
don't
need
too
many
certs
in
the
you
know.
First,
for
a
company
like
s
p,
they
can
afford
to
send
a
few
dollars
aws
away
to
get
to
get
higher
rate
limits
and
certain
guarantees
that,
let's
encrypt,
just
isn't
set
up
to
give
their
goal,
is
to
encrypt
the
internet,
but
not
necessarily
to
run
financial
business.
So
you
know
find
the
right
tool
and
one
of
the
goals
from
with
building
canadia
was
that
you
should
be
able
to
customize
this
stuff.
B
So
I
think
that
was
a
success
and
now
we're
going
to
talk
a
little
bit
about
what
the
data
path
looks
like
for
k-native
serving
because
we've
talked
a
whole
bunch
about
it
being
serverless
and
let's
see
what
that
actually
means.
B
So
the
first
goal
for
handling
requests.
I
call
this
life
of
a
query.
I
got
my
start
at
google
and
one
of
the
first
talks
that
you
get
is
here's
what
it
looks
like
when
you
actually
do
a
search
query?
So
I
always
call
it
life
of
query.
But
the
goal
for
steady
state
is
that
things
should
look
pretty
close
to
the
same
cost
as
if
you
were
just
using
raw,
vms
or
raw
kubernetes.
B
So
a
load
balancer,
splits
stuff
across
your
http
routing
layer
and
stuff
gets
sent
to
a
user
container
and
that's
all
lovely
and
good
and
then
so.
The
next
question
is,
you
know:
okay,
lots
of
traffic
comes
in.
You
know
that
goal
of
3000
concurrent
users,
for
example,
that
mark
was
talking
about.
B
How
do
we
actually
count
those
users
and
then
make
sure
that
we've
got
the
right
number
of
containers
and
k-native?
Does
this
by
injecting
this
little
proxy
in
front
of
the
user
container
and
being
able
to
count
all
the
requests
and
feed
that
back
into
a
based,
auto
scaler?
So
if
you're
familiar
with
the
kubernetes
horizontal
pod,
auto
scaler
by
default,
that
will
cue
off
of
cpu
or
possibly
off
of
a
custom
metric,
but
you
have
to
do
a
bunch
of
plumbing
to
get
your
custom
metrics
in
there.
B
With Lambda, you have to write your code with the assumption that there's only going to be one request going on in a process at a time, because that's how Lambda works. But lots of people actually like that, because it means that if you want to have globals for stuff, or you just want to know that you aren't going to get interfered with by anyone else, you have a container that wants one request at a time. We wanted to support that in Knative, so the queue-proxy also lets us enforce that.
B
You
know
you
may
have
50
http
routers,
but
you're
only
going
to
get
one
request
per
container
at
a
time.
You
can
also
crank
that
up.
If
you
want
to,
you
can
say,
hey,
50
or
100,
you
know
look
I've
written
this
in
java.
It's
all
reentrant,
you
know
let
it
go
to
a
thousand
or
keep
it
at
one
and
the
default
is
to
assume
it's
reentrant.
B
But
it's
easy
to
just
say:
you
know
one
request
per
container.
So
now
now
you
know:
okay,
that's
all
nice
kind
of
nice
ergonomics.
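That per-container request limit is the `containerConcurrency` field on the revision template. A minimal sketch, with a placeholder name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: one-at-a-time                # placeholder name
spec:
  template:
    spec:
      # 1 gives Lambda-style isolation: one request per container.
      # 0 (the default) means no enforced limit, i.e. assume reentrant.
      containerConcurrency: 1
      containers:
        - image: registry.example.com/worker:latest  # placeholder image
```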
B
This is where one of the clever tricks of Knative comes in: we run a single activator for your entire Kubernetes cluster, or possibly a replicated set of them, but a small number. When you add a new function, it doesn't add a new activator; the activator is shared, and it will pause the request and say, hey:
B
There's
no
instances
of
this
kubernetes.
Please
go
talk
to
cubelet
and
actually
get
a
pod
ready
and
once
that
pods
ready,
then
the
activator
will
forward
the
request
along
so
you'll
see
a
longer
response
time
for
those
requests,
but
they
won't
get
dropped
on
the
floor,
and
so
we've
talked
about.
Okay,
we
had
zero,
we
want
many.
How
do
we
get
there?
B
How do we do the opposite? No requests have come in, and we want to shut down. It turns out that actually takes a little bit of a clever dance as well, because first you need to add the activator in; after the activator is hooked in and the HTTP routers all know about it, then you can scale things down to zero the rest of the way, and Knative tests and handles all of that.
B
Maybe you only want to send 10% of traffic somewhere anyway, so that when you tell people "reload a few times and it should work," it actually does. But say you only have four pods: with standard Kubernetes services they're all round-robin, and if you replace one pod, that's a 25% rollout. Since we control the HTTP router, and it's something Envoy-based so far, we can actually program the split and say: do 10% and 90%.
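That kind of split is expressed in the Service's traffic block; a minimal sketch (the revision name and image are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: example.com/hello:v2   # placeholder image
  traffic:
    - revisionName: hello-v1   # placeholder revision name
      percent: 90
    - latestRevision: true     # the newest revision gets the canary share
      percent: 10
```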
B
I just kind of hinted at this one: we support multiple HTTP routers, so Contour, Istio, Ambassador, Gloo, and Red Hat wrote a really lightweight one called Kourier that's specifically focused on being used by Knative. We want to integrate with your HTTP router. We don't want to tell you: oh, you've made a big investment in Istio mesh, or a big investment in Gloo or Contour or something like that, sorry, throw it all away.
B
So we have adapter layers for all of these, and you can choose which one you want to install. I'm going to blaze through this, because I have maybe one minute left and then we'll have time for questions at the end. The mental model for Knative is, as I showed you with that Service: a Service is made up of a Route, which is the networking part, and a Configuration, which tracks basically how you want things to run, and the Configuration creates additional Revisions.
B
Every time you update the Configuration, you get a new Revision, and the Route lets you pick which one receives traffic: it can either be the latest Revision or a specific Revision. We call running with the latest Revision at 100% "YOLO mode": you only live once. But it's also really handy for development and, you know, stuff that's not really critical. And that's the end; we're available for questions and anything else. Feel free to use the Q&A or the chat to get in touch with us, and we've got contact info afterwards.
B
C
A
Cool, yeah. Okay, thanks Evan and Mark for a great presentation and demo. We now have some time for questions, so if you have a question around this topic, feel free to drop it in the Q&A tab. Okay, one question just came up. Who will take this one?
B
As always, the answer is going to be "it depends." With the Kubernetes horizontal pod autoscaler, I don't know what knobs there are to tune how frequently it checks to decide if it should rescale an application. I know that the Knative autoscaler team looks at that and has tests that are basically: hey, we're at zero, what happens if we dump a thousand requests per second on a Knative cluster with, you know, container concurrency set to 100 or something like that? How fast do we get to ten?
B
That is, to the 10 or 12 instances that we should have for handling that much load. So they have a pretty fast cycle on the metrics collection, and they've recently migrated that from HTTP to gRPC for additional efficiency, because they were finding that for large clusters it was too slow.
B
I'm assuming, Mark, that S&P is using the default autoscaler, the Knative one? Yes.
C
B
The goal when we started Knative was to be able to eventually match Lambda's performance, and that's going to take more effort, including getting down into Kubernetes itself. Some of the limitations we see today are around things like how long it takes to schedule a pod and pull the image, and I think there are one or two KEPs (Kubernetes Enhancement Proposals) percolating about how to make that faster.
B
For example, readiness probes: if you use the Kubernetes readiness probe, you've got a minimum of a second before your service can become ready, and we'd like that to be below a second, you know, every 100 milliseconds or so. Currently the activator goes and does that check even on unready members of the service, to see if it can race and beat the Kubernetes propagation time on services.
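For reference, the probe fields in question look like this in a plain Kubernetes container spec; `periodSeconds: 1` is the one-second floor being referred to, since probes cannot fire sub-second (path and port are placeholders):

```yaml
readinessProbe:
  httpGet:
    path: /healthz    # placeholder path
    port: 8080        # placeholder port
  periodSeconds: 1    # 1s is the minimum Kubernetes allows between probes
  failureThreshold: 1
```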
C
I think one of the things with Knative is that with the Lambdas and Azure Functions of the world, you're very limited; with Knative, we can do so much. We have full control over what we put in there, and we're even talking with Pivotal about trying to put PowerBuilder in there, because we have legacy apps, right? We want to modernize, and there are different ways of modernizing. It gives us a lot more flexibility, basically, because it's under our full control, we can do whatever we want with it, and it's multi-cloud.
B
C
B
It's definitely been an interesting... oh, there's another question, so I'll stop: what percent of compute, or of the app estate, could ultimately go to Knative or serverless? I'm going to pass that over to Mark first to talk for S&P, and then I can offer some unfounded opinions about the state of the industry.
C
Yeah, so from our experience, other than the commercial off-the-shelf software: if you think about it in terms of capabilities, we are mapping capabilities, and even the first stage is almost like cut and paste, like you take your Java code and cut and paste it into your .NET code. But when we look at it, the percentage really depends on what the current state and the target state of your apps are, and the target state for us is 100%, other than the commercial off-the-shelf software.
C
B
I've also moved some commercial off-the-shelf stuff to a serverless environment, if it can be containerized and is, you know, stateless. Sometimes it's a bad idea: I ran Jira once like that, but it turns out that Jira keeps a bunch of local caches, so that was an unsuccessful experiment, because your issues wouldn't show up until somebody told it to re-index.
B
So I would say: if you have off-the-shelf software, try it. You may discover that it's not successful, but some of it actually does work pretty well. In terms of overall numbers, I would say that Knative, specifically the current Knative Serving, might fit about 30% of all workloads, where "all workloads" also includes stuff like, you know, databases and storage. I think there are other places where things could get more serverless than they are today.
B
Looking at a system like Pulsar or Kafka or ActiveMQ that scaled out automatically from one node to many, without having to think a lot about how many partitions I have, would be a really cool serverless distributed-log system, and I'd love to see one of those.
B
The workloads that fit best are stateless, like twelve-factor apps: the assumption is that you can start two or five or ten instances, and when you don't need them, you can just shut them down. So if you need state or shared state, use something like memcache or Redis for lightweight stuff, or just use, you know, your database or an object store to share state. Keeping state in your process or on disk is not such a good idea, so don't do that.
B
And then there's a question about plans to support FaaS and Knative on ESXi through Project Pacific. Well, two answers: you can run Knative on TKGS clusters today; that works, and we use it for some of our internal testing. In terms of product plans:
B
I'm going to have to point you back to a VMware product manager; Valentina Alaria would probably be the best, but feel free to send me email and I can connect you with the right VMware product managers. I mostly focus on the open-source software and don't keep track of VMware's product plans.
C
B
Oh, you mentioned TCP. One of the fun things with Knative, if you're a networking geek, is that we actually support HTTP/1.1, HTTP/2, and WebSockets all at the same time, which is kind of a fun combination of spaces to talk about, because there are lots of different ways to stream things back and forth even though you're just speaking HTTP, and that's one of the great things about having such a broad ecosystem to pick from.
B
You can autoscale, you know, streaming responses, and I've seen good use cases for that. I've also seen it start to bleed into the stateful space a little more than I'm comfortable with, so: with great power comes great responsibility.
A
Yeah, perfect. All right, that's all the questions we have time for today. Thanks for joining us; the webinar recording and slides will be online later today, and we're looking forward to seeing you at a future CNCF webinar. We also have KubeCon + CloudNativeCon North America next month, starting November 17th, and we're looking forward to seeing you there. Have a good rest of the day. Thank you.