From YouTube: OpenShift Commons Gathering Santa Clara 2019
State of Serverless
Brian "Redbeard" Harrington (Red Hat)
So hi, I'm Redbeard, and we are going to talk about serverless and Knative stuff: serverless, service mesh, Istio, we've got a whole bunch of things here. So let's just dive in and talk about service mesh to get started. Service mesh is a component of all of these things; they kind of build on top of each other, so it makes sense to talk about it first, where it's at, etc. "Service mesh" kind of means different things to different folks.
I actually had some pretty interesting conversations about a month and a half ago with folks from the Red Hat networking team, who were like, "Hey, you know, we're building a service mesh, and it's all doing MPLS between things." I was like: whoa, whoa, doggie, that's not the same beast; we're overloading terms here. So where we are going with this is the set of problems we're actually trying to solve.
Everybody and their sister has an application server on which they have put all kinds of workloads, and then they have built their microservices architecture, or at least they aspire to, and what that ultimately means is that they're really going from a microservices architecture to a distributed architecture. And when we are building things in a distributed architecture, it means we've got a lot of moving parts, with lots of services going all over the place.
And now, all of a sudden, your security team is asking you about how you are controlling the communications between all of these components, and you tell them "I am," and start figuring out how you are going to do it before they find out. Oh well, it's on the cloud, so all of my IAM roles are taken care of there, and, you know, all my security groups for EC2... oh wait, those only cover the servers. I've been foiled again. So what we do is introduce the concept of a service mesh.
The goal there is to have a dedicated network for all of the service-to-service communications. So it's pretty simple. It is not necessarily an overlay network; we've got enough of those within Kubernetes as it is, so we're not going to pile on one more overlay network. But it starts to make a little bit more sense, at least from a technology perspective.
I won't say more, but differently: if we actually go back and kind of look at this, at how one service is talking to another, it ultimately means that you're going to have to worry about actually capturing traces between those components. Pretty soon you're going to start wondering about throughput between components, and the application's latency between them, even more than you did in your previous era of having all of this together; it's pretty easy to trace latency between your HTML and one of the services on that application server.
Fire up your Java debugger and you're good to go. When you're doing this across a network of machines, though, that becomes a pain in the butt, and that's why we start introducing things like Jaeger, to actually be able to trace through all of our components. It's why we have components like Prometheus and Grafana there, to actually capture our metrics and be able to visualize them. This is also just how we happen to be viewing this.
If you're at Uber, you're going to be looking at your own time-series storage; if you're already heavily invested in Grafana, possibly Loki is going to make more sense instead of Prometheus. This is just, as I said, the way that we view it at Red Hat. And Kiali is interesting there, in that it is a net-new project created by Red Hat for visualization of the components of the service mesh itself.
So it's a meta-project: on its own, it is not going to provide you anything in a standalone state. It only really brings value when you start using it with something like Istio. Continuing on from there: we've talked about what happens when you have all of that. But the real question becomes what happens when you have that, plus that one team of ninjas over in the corner who refuse to write things in Java because they live in the future and they want all of the memory guarantees of Rust.
But you also don't want to necessarily force all of the other worker bees to change what they're doing, because they take those existing apps and they make them run. What do you do about that? Well, ultimately, it is easiest to try to address the complexity of those problems at the infrastructure level. What I mean by that is: rather than being in a state where you have a bunch of different libraries that you're pulling into every single one of your application components, and then that team building things in Rust goes:
"Oh yeah, I can't use those Java libraries, so I'm not gonna be participating in tracing, and I'm not gonna be participating in any of the service discovery or anything else. But it's fine, because we're smart and we're gonna do it ourselves, and we're gonna completely reinvent that wheel, and then you get to kowtow to us."
Nah, man, that's not happening. What we will do is take all of those things and push them down to an infrastructure layer, so that nobody has to pull in those libraries, and that's fine: it doesn't matter if you're on Rust or Java or Go or Node or whatever. We will take those things and kind of inject them at the actual infrastructure level, and everything will be good, says Redbeard... because I'm not running your infrastructure. Oh sorry, that one fell flat.
What we can do with that is actually colocate multiple containers together within a single pod, which is one of those things that academically we all understand but at a practical level we generally don't use all that much. But here is a case where now we can take a proxy and put it into the actual pod that our application components are running in, and schedule it in such a way that it is actually mutating the network
namespace of that pod and controlling the ingress and egress of traffic from that location. We'll talk a bit more about how it does that.
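To make that concrete, here is a minimal sketch of how the opt-in looks in upstream Istio: a mutating admission webhook sees an injection annotation, adds the Envoy sidecar, and adds an init container that rewrites the pod's iptables rules so that all ingress and egress flows through the proxy. The pod name and image here are invented for the example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-a                      # hypothetical application pod
  annotations:
    # Ask Istio's mutating webhook to inject the Envoy sidecar; the
    # injected init container then rewrites the pod's network namespace
    # (iptables) so every connection in or out passes through the proxy.
    sidecar.istio.io/inject: "true"
spec:
  containers:
    - name: app
      image: example/service-a:latest  # hypothetical image
      ports:
        - containerPort: 8080
```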
So what this begins to look like is: we have, as we do with any good distributed system, a control plane and a data plane.
A failure of the control plane does not affect the running of the data plane. Jaeger falls over? Your application does not. Pilot falls over? Your application does not. Mixer falls over? Your application does not, unless you've told it to fall over, because you absolutely want a die-hard enforcement of policy where you want things to fail closed, because it's better to make sure that everybody gets locked in that burning building than allowing them to get out... strike that, reverse it.
So what this begins to look like, in the context of OpenShift specifically or Kubernetes in general, is that we actually put the control plane in place, and then those services get injected at the network level, in between OpenShift and your applications.
What this means is that now, just because you use Kubernetes doesn't mean you have to use these components, while at the same time they can be enabled on a project-by-project or a namespace-by-namespace basis.
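In upstream Istio, that per-namespace opt-in is literally one label (OpenShift Service Mesh tracks membership with its own resources, but the idea is the same); a sketch with an invented namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments          # hypothetical team namespace
  labels:
    # Pods created in this namespace get the sidecar automatically;
    # every other namespace in the cluster is left untouched.
    istio-injection: enabled
```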
So now you can have these teams of individuals begin to opt into the use of all of these components without prescribing it across the entire organization. And the reasons why you're going to do this begin with one of the most important ones, and that is fault tolerance.
So now we can put in place the idea of circuit breakers directly at the infrastructure level. Anybody who has mucked around with the wiring in their house will know that when you try to draw too much amperage over a wire, a circuit breaker trips. Why does that happen? Well, we decided a long time ago, through the National Electrical Code, that we would rather have pieces of hardware fail than have our houses burn down from drawing so much current through that copper that it generates heat, and then the heat causes fires.
We decided we would rather have things fail in a known, predictable way, so that we can react to those failures, and that's the same idea as a circuit breaker. So now we can actually implement these circuit breakers at the network level, so that when a service fails, for example, all communications between service B and service C stop.
We can actually work around that, so that we can say: it would be nice if service C ran, but ultimately we can make it work if service A bypasses it and goes around, because, for example, we would rather have better response times, or we don't really need that one little bit of telemetry that we would have gotten off of service B. It is better to fail in such a way that we actually bypass that component than to get stuck in a hung state.
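In Istio terms that circuit breaker is a DestinationRule with outlier detection: a failing endpoint gets ejected from the load-balancing pool instead of leaving callers hung. A sketch, with the host name and thresholds invented:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-c-breaker
spec:
  host: service-c                    # hypothetical service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 10  # bound the queue instead of hanging
    outlierDetection:
      consecutiveErrors: 5           # trip after five straight failures
      interval: 10s                  # how often endpoints are scanned
      baseEjectionTime: 30s          # how long a tripped endpoint sits out
```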
It also means that we can actually implement timeouts and retries at the network level, rather than having to do it in the applications. You know that one component there in the middle, service B? The developers of service B, when are they going to get it together? They just will not fix that memory leak, and because of that, everything starts blowing up. And then, when it starts blowing up,
we know that if you just try it again, since the pod is going to be recycling about three times a minute, you can let it retry: about fifteen seconds later it'll get to it, and everything will be fine. I mean, sorry, I'm just speaking about organizations I've consulted with in the past; I know I'm not talking about how any organizations here work. But I'm saying it's possible, when you do run into those edge cases, to fix that at the network level.
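Pushed into the mesh, that retry policy becomes a few lines on a VirtualService rather than code in every caller; a sketch, with the names and numbers invented:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b-retries
spec:
  hosts:
    - service-b                  # hypothetical flaky service
  http:
    - route:
        - destination:
            host: service-b
      timeout: 15s               # give up on the call as a whole
      retries:
        attempts: 3              # callers never see the first failures
        perTryTimeout: 5s
        retryOn: 5xx,connect-failure
```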
Or being able to provide rate limiting: again, service B just keeps getting me down; it just cannot handle the amount of traffic, so we have to scale it differently and work around it. So we can rate limit at the network level. What is happening there behind the scenes
is that service A has in its configuration "I'm going to make a request to service B," and then the proxy component, known as Envoy, actually sees that that request is going to be made, chooses an endpoint, and directs it to that endpoint, in such a way
that the only way service A is able to egress that pod is through the proxy itself. We'll touch on that a little bit more when we talk about security, but it's how we're able to actually provide guarantees around rate limiting here.
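Full request-rate quotas in that era of Istio went through Mixer policies, but the simplest fence you can put up is a connection pool cap that Envoy enforces on the caller's side; a hedged sketch with invented limits:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b-throttle
spec:
  host: service-b                     # hypothetical service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100           # cap concurrent connections
      http:
        http2MaxRequests: 50          # cap in-flight requests
        maxRequestsPerConnection: 10  # recycle connections regularly
```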
And that's it: security. So, has this ever happened to you? You're managing a CA via the CLI, the command-line interface, and you now need to begin providing attestation of all of your workloads, because you need to get to the point of having a zero-trust network, etc., etc.
And how do you attest these workloads? Easy: your security team. You send them a whole bunch of information; you send them the product requirements document, all of the names of the developers who are working on it, copies of their drivers' licenses, etc. And now you know that when they get around to it and they click the button, or run that command-line-interface command, and then they send you that private key... and they've already messed it up. You never transfer private key material over the network. So how do we get around this?
What we do is we have the pods themselves actually use the attestation mechanisms of the Kubernetes API to be able to do this, and in parallel we utilize the components of Istio to schedule a certificate authority that is there only to be used by the service mesh. This is not for end users on the Internet to be able to reach in and talk to your individual endpoints; this is not for components of your actual enterprise software stack to be able to reach directly back to things within the service mesh.
It means that the components of Istio can actually utilize the Kubernetes API to say: I know that the service account for service B is the only user who should ever be using this client certificate, and because of that, I will allow service B to create a certificate signing request and an RSA private key, and then we will allow for that CSR to be transferred across. We verify that it was created by the service account that we thought it was, and then we issue the public cert. Hunky-dory.
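The upshot is that mutual TLS becomes something you declare rather than code you write. As an illustrative sketch using the current Istio API (this exact resource name postdates the Istio release being discussed here):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: team-payments   # hypothetical mesh namespace
spec:
  mtls:
    mode: STRICT             # sidecars refuse plaintext; the mesh CA
                             # issues and rotates the certs automatically
```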
Now we have automated mutual TLS authentication, transparent to all of these services. What it ultimately looks like, for folks who are familiar with it, is something kind of like stunnel; only now, rather than having the shotgun-blast mechanism of saying "configured, probably fine," we actually have components that know the provenance of all of our workloads and know when certs should be issued and when they should not, and they can take it and go nuts. Now, it also means that... let me get back over to the right window. There we go.
Right now, you either go with one of the historic incumbents, or you go with something like ACME, which is more commonly known through Let's Encrypt. The beauty of Let's Encrypt is the automation: the fact that it is using attestation mechanisms like DNS ownership (because, ultimately, if somebody has taken control of your DNS, they're going to route that traffic anywhere they want anyways), but also utilizing other capabilities, like verifying the actual challenge response on individual services, to be able to automate that workflow. And it works really, really well.
On the other hand, humans would not be able to reissue those certs that quickly, and it means that, in the event that you did actually have a malicious actor in your system, you can actually limit the window of validity in which they can weaponize an attack. That is not to say that you remove that vector of attack, but it means that if you're in a situation where you have a giant Java application, maybe from a couple of Aprils ago, and you're just accidentally not patching those Java libraries, and somebody gets into that system,
it means that when that cert expires, that access will be revoked, the pod will be recycled, you will bring a new one up, and they are free to try to weaponize that attack again, but they need to start from zero. They don't get to start from six links deep into the chain. It means that they constantly need to be automating their own weaponization, which eventually they will, but that's the nature of what we're doing here: we are trying to work faster and stay ahead of these malicious actors.
I'm not saying that you need to be punched in the mouth, but I'm saying that having a friendly helper, like your own personal Chaos Monkey who does not have the same force to bear behind it, will at least allow you to get prepared and know what's going to be coming. So what this begins to look like is: we have situations where now we can take and mutate the network traffic going between services.
We can inject delays, and do it transparently to the services, so that we can start to see what happens when the delays are there, or what happens when we start seeing protocol-specific errors. Wow, what if 5% of all of our total requests coming across are surfacing 400s? A transient error is telling us to try again: what do we do? How do we react? Are those errors surfacing all the way back to customers? Are there useful debug messages going all the way back that really shouldn't be there?
Is it dropping, like, the table names and everything? Isn't it easier to just test these things out and go: no, we're good, we've run through this, we've seen all of it in practice, we know.
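That kind of fault injection is declarative in Istio; a sketch that delays one slice of traffic and fails another with an HTTP error, without touching the application (names and percentages invented):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b-chaos
spec:
  hosts:
    - service-b              # hypothetical service under test
  http:
    - fault:
        delay:
          percentage:
            value: 5.0       # slow down 5% of requests
          fixedDelay: 7s
        abort:
          percentage:
            value: 5.0       # fail another 5% with a client error
          httpStatus: 400
      route:
        - destination:
            host: service-b
```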
And then we get into the bits that are a bit more fun. I save this for last, because this is the part that folks actually get excited about.
You know, the kind of dynamic routing aspects. I love picking on Dan Walsh. Dan is our Boston employee; he's over there, happily working away on SELinux, and we decide that all of his requests going inbound to Bugzilla, we are going to send to a new version of Bugzilla: we're going to send him version 5 while everybody else is on four. So we're going to make him suffer pain in the same way that he makes all of you suffer through writing that policy. But I... just, you know.
What we're able to do here is actually annotate the requests that are flowing through our system, to be more strategic about which versions of services they're flowing to. So we can say: hey, the entire world goes to, you know, v1 of Bugzilla, or I guess for this example version 4 of Bugzilla, but Dan and everybody else in that Boston office, we're going to send them to version 2... version 5. I shouldn't have used Bugzilla; I'm already confusing myself with the numbers.
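A hedged sketch of that header-based routing rule (the header, host, and subsets are invented, and the v4/v5 subsets would be defined in a companion DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bugzilla-routes
spec:
  hosts:
    - bugzilla               # hypothetical in-mesh host
  http:
    - match:
        - headers:
            x-office:        # made-up header tagging Boston users
              exact: boston
      route:
        - destination:
            host: bugzilla
            subset: v5       # Dan and Boston get the new version
    - route:                 # default rule for everyone else
        - destination:
            host: bugzilla
            subset: v4
```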
Here we're able to start asking: okay, is the application working the way we expect it to? Are there any problems? Are we able to respond to things? When they click Submit, or they go to update an issue, is it flowing back correctly to the database? Is it having any performance problems, etc.? And then, once all of the Boston employees give us a sign-off that everything is good to go, or at least a quorum of them,
we can start moving into more of an A/B deployment model, where you can go: hey, now we are going to send half of all of our users to v4 and half of all of our users to v5. We will make sure that everything looks good to go before we send the entire company over to it, and then the entire world writ large over to it. Yeah.
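The A/B split is the same VirtualService idea with weights instead of header matches; a sketch, again assuming v4/v5 subsets exist in a DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bugzilla-ab
spec:
  hosts:
    - bugzilla
  http:
    - route:
        - destination:
            host: bugzilla
            subset: v4
          weight: 50         # nudge toward 0/100 as confidence grows
        - destination:
            host: bugzilla
            subset: v5
          weight: 50
```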
This is the way that software is deployed at scale today anyway. Ever wonder how all of those updates to CoreOS images, or to the Container Linux machines, got sent out?
It's just like that: we start out sending the updates to one machine a minute; looks good, let's try five machines a minute; still looking good, let's change this over to sending updates to one percent of all users every minute. That's the sort of deployment ideology where, rather than just fire-and-forget, you actually do staged rollouts, so that you can hit the panic button in the event that there's a problem and make sure that the updates are not going to anybody you don't intend them to.
As you begin to evolve these ideas, it also means that you can take this a step further: before you even get to the point of deploying those services for Dan, you can make all of the traffic also go to version 5 of Bugzilla, or version 2 of the service here, and have the proxy for v2 of service B transparently drop all of the responses to those requests.
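In Istio vocabulary that is traffic mirroring (shadowing): live traffic is still served by the old version while a copy is teed to the new one and its responses are thrown away. A sketch:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b-shadow
spec:
  hosts:
    - service-b
  http:
    - route:
        - destination:
            host: service-b
            subset: v1       # users are still answered by v1
      mirror:
        host: service-b
        subset: v2           # v2 sees real load; its responses are dropped
```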
Now we can actually see whether the service is going to handle the amount of traffic that we're going to send at it, and we have a high degree of confidence that everything is going to work the way that we expect it to. And when we have all of these things in place, think back to that spider web that I showed you at the beginning.
We want to be able to trace all of the components, and so what we do is implement distributed tracing, specifically using Jaeger, which is an open-source project created by folks from Uber and donated to the CNCF. Folks from the Kiali and Red Hat Jaeger teams specifically have done a lot of interesting work; we now have a Jaeger operator, which you can deploy via OperatorHub.io. Running EKS? Go and deploy
it. Running OpenShift? It's going to work great. GKE? No problem. Yeah, that's the point of OperatorHub: being able to pull these components in and actually be able to use them. So, the tracing mechanism here through Jaeger: the idea is to get you, at the very least, a bare-minimum level of functionality just by having injection done at the proxy level.
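Once the operator is installed from OperatorHub, standing up a demo-grade Jaeger is a one-resource sketch like this (the instance name is invented):

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger        # hypothetical instance name
spec:
  strategy: allInOne     # single pod, in-memory storage; good for
                         # kicking the tires, not for production
```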
Now, obviously, if your application components choose to use the OpenTracing API and maintain even more headers, or inject additional information about the calls, you can actually get much more visibility into the total call depth of a chain of services throughout your network. In this case we can see that the weakest link is once again service B: we look at the actual set of response times down at the bottom, and we say, man, service A, a very snappy service.
Service B, though, is falling off a cliff; it's over three times as slow as service A. So when somebody calls your help desk and goes "the website is slow," or even worse, "the website is down," and you're going "I'm just sitting here trying to play Halo"... I know that nobody gets that joke; I knew that Ben would get that joke. But you're now sitting there with an actual thing that you can begin to measure and test and debug, which is:
okay, yes, it's slow, but we see that service B is slow. So, team working on service B: once again you broke the build, please fix it. I also mentioned observability here. Now, this is admittedly an old screenshot of Kiali, but the point here is to show you that you can actually get a graph of how all of the services are connected together.
So we can see that we have communications kind of flowing downwards, going from... oh, that doesn't work at all... yeah, so, flowing down through this graph, and then, when it gets from there, we have three different versions of the services going downstream, and that one specifically we're getting 500 errors on. So we know exactly that that new version of that service, the v2 one, is having problems, whereas v1 and v3 are just fine. So we can actually begin to do a bisection and narrow down exactly where failures are starting to happen
throughout the environment. We know: okay, it did work in v1, so this isn't just a fluke, and it's working in v3. So did we actually fix the regression there? Did we just kind of sidestep it? Did we rip out some piece of code that just happened to fix it? Now you can actually have more tooling in place to visualize these things and do further debugging. And now: serverless.
There's your data center. "That is not serverless!" Yes, there are servers. The idea is to begin to abstract the servers for your services into things that you don't care about. No longer is this something where you care about some version of Apache or some version of nginx; these are just processes that are handling network requests, and the platform takes care of all of the components under the hood.
Now, with that, there's a little bit of confusion, because most folks will generically swap "functions as a service" for "serverless computing," and those are not the same thing, because there is a lot more functionality that gets provided by an overall serverless system than just functions as a service. Digging into this a bit further: we have an evolution here, where we start with the idea that we've built these distributed services, and then we realize that we have put too many actual functions into a single service.
So we break those down into microservices, where we now build a single service to provide a single purpose; and then we still have to manage all of these build systems, and we still have to scale the actual services directly, which begins to be its own pain in the butt. So that is why, a few years ago, the ideology of functions as a service was introduced, and why folks began moving things over to functions as a service. I can tell you, using one of the major cloud providers,
it became very, very helpful when we started using the CloudTrail mechanism with that public service (this is at CoreOS, by the way) and then set up a series of functions running on that network that would automatically annotate those CloudTrail entries with additional pieces of information that were not showing up in CloudTrail. It's a very simple thing to do: hey, watch these message buses!
It doesn't stop the developer who commits keys to GitHub, where you're then sitting there reacting to multi-hundred-thousand-dollar bills from said cloud provider, but it does allow you to mitigate much of the problem. So what this begins to look like, the reason why you are moving in this direction, is specifically because you're trying to focus on the things that your business is good at.
If your business is not in trying to provide that kind of overall support for Tomcat, and you really just want to focus on the bits of code that you're working on, then just focus on the bits of code that your team are truly experts at. Focus on the productivity and the getting things done, and less on the degree of control that you have and which arbitrary libraries you can vendor in, etc. So how does this start to work? Well, you have some event happening.
In other words, it's a lot like the way we have been used to doing things within POSIX operating systems, only now we have much more impressive event systems and many more sources and sinks that we can send data to and pull data from than just standard in, standard out, and standard error. It means that we can now use message buses to do more intelligent things and actually maintain those messages between the actual services that we are trying to run.
We can now have things like a process running and putting information into a FIFO, and then multiple other processes watching that FIFO and reacting to it, only over a network. So this is kind of where we're at; this is the state of things right now. We are just getting to the point where it is reaching early adoption, and we still have a ways to go.
That's okay, because what this really means is that it's going to allow us to find some overall balance of meeting the needs of what we're trying to do without completely faceplanting by building the wrong thing. You know, there were originally some slides attached to this, talking about the
Camel... not Camel, Apache's cloud functions project, OpenWhisk. The idea there is that it was trying to provide functions as a service, but the reality is that it did not provide everything that was needed, which we'll talk about in a second. So when we look at this, going back to functions as a service: they are never going to be the way everyone does things. That's not the reality; it's not trying to be the reality.
We are going to hit an inflection point where you decide which of those is the right answer based on the need for control, which is why we get into having a bell curve here. We are going to have a situation where, as organizations choose which of those is the right fit, we are going to see an increase in adoption.
I don't know why there is the tail-off on there, but the point is that we are never going to hit the same degree of adoption that we have with microservices and Kubernetes, though we do see that there has been a giant uptick in this over the past few years. So, in 2014 we got the introduction of AWS Lambda; OpenWhisk, that was the Apache project I was trying to remember. (I'm not the product manager for serverless computing, by the way; that is William Markito, who could not be here today.)
So we go through this increasing number of options that we have available, and get to the point where today we have a new option, called Knative, right there. So, as we kind of dance through this, let's talk a little bit about Knative. Knative goes beyond just the services and prescribes a build mechanism, the serving component, and the event system.
So, the builds: all of these components are being brought in as native components of Kubernetes, done in a way that seems extremely congruent. You're going to do builds as pods on top of your cluster; you're going to pull sources down directly from source control; you're going to annotate the exact locations of containers and the endpoints of the container storage engine that you are going to deploy to, for example Quay, which I highly recommend folks check out.
Similarly, you're now going to begin to build standard interfaces for how you actually serve those workloads: how you are going to pull in your configuration, how you're going to actually checkpoint state, etc.; you're going to use the actual ConfigMaps within Kubernetes. And, finally, how you are going to consume CloudEvents. The whole purpose here is to begin with serving the kind of serverless-type workloads, but ultimately build into having cloud events available.
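A minimal sketch of the serving piece: a Knative Service (shown with the current API version) that the platform scales for you, including down to zero when idle. The image and env var are invented:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: example/hello:latest  # hypothetical container image
          env:
            - name: TARGET             # made-up app configuration
              value: "OpenShift Commons"
```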
I need to tell William to update that date, because this week was not September 5th. But the point is also to work with the Kubernetes ideologies and continue to move very, very fast: keep up the API levels that we are familiar with and get them caught up to where they are with Kubernetes, and, frankly, to break many things now, when there are far fewer users, rather than breaking everything later, when you've got a lot more users.
A
No,
it's
really
crazy
idea,
but
Kay
native
is
giving
you
building
blocks,
but
Kay
native
explicitly
has
decided
that
they
do
not
want
to
be
an
entire
platform.
You
know
they
don't
want
to
provide
tooling
just
to
be
able
to
deploy
functions.
You
know
it
means
that
there
is
a
lot
of
leeway
here
to
be
able
to
fix
some
of
the
more
prescriptive
parts
of
these
processes
through
the
use
of
operators,
specifically
around
key
native.
Deploying things in standardized ways that folks know about means they can then build on them in a way that makes much more sense, and that's where the relationship between Knative and Red Hat comes in. So now we can go in and say: we are going to use Build, and use some of our other source-to-image and development tools, as the actual build services. We know these well; we know that we can support them.
We know that we can provide all of the interfaces that are needed to make them successful, so we are going to conform to the interfaces that Knative prescribes and make them work. Similarly, we will conform to those interfaces for messaging and for functions as well. And when we know that we are going to have these functions running all over the network, and we know that we need to actually secure them and begin to provide better routing of them, now we have some peanut butter with our jelly.
We can begin to utilize Istio and the service mesh kind of as underpinnings to make sure that all of the inter-service communications can truly exist in a zero-trust manner. And this begins to get more complex as we add more services with more events, and we trigger builds, and the snake eats its own tail, and we have a giant ouroboros here. But it makes much more sense to me if we talk about actual use cases.
So let's say that we have OpenShift, where we have Kubernetes running. We receive a scale-up event, and we go through and verify a quota, we update any kind of billing things that need to happen, we update a specific service broker, and we send a message out to our billing service. And then, in the event that there's a failure,
we notify the on-call person, and then we fire off a function that opens a specific JIRA ticket, possibly including a trace or some other information, and then actually fire off new pods to react to that. So now we have automated recovery of the services, while at the same time knowing that all of the telemetry that we needed has been captured and put into that JIRA ticket.
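With Knative Eventing, each of those reactions is a Trigger filtering events off a broker and handing them to a function; a sketch with an invented event type and subscriber (current API version shown):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-scale-failure
spec:
  broker: default
  filter:
    attributes:
      type: com.example.scaleup.failed  # hypothetical CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: open-jira-ticket            # hypothetical ticket-filing function
```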
Or: we see, three weeks earlier, that this vibration sensor started going off. Now, rather than having your planes be offline because the part finally broke, you know with a high degree of certainty which parts you should proactively inspect and replace, and you can actually go through and have those parts drop-shipped to the airport before the plane even lands.
Thanks, man; thanks, Jeff, always there for me. But all of these things are great for building chat bots and data transformations, packet processing for voice, and things like that. So think about this: just because the technology's there doesn't mean you have to use it, but just because you aren't using the technology also doesn't mean it's not there and won't continue to evolve, and as technologists it's up to us to be aware of what's possible. The times when you don't want to use
this are super-long-running tasks that can't be checkpointed, or things that can't deal with a delay while you actually scale them up. And where are some of these things going in the future? Well, this is easiest for me to speak to when I talk about the service mesh components, since that's the thing that I'm the PM of: we are going to GA with that in May. It's exciting, the GA of service mesh. All kinds of sales
A
Folks
and
folks
keep
asking
me:
we
are
going
to
have
the
one
day
release
of
service
mesh,
which
is
really
the
one
dot
one
release
of
ISTE,
oh
and
then
the
thing
that's
exciting
to
me
is
that
we
are
kind
of
taking
our
process
and
drinking
our
own
champagne.
We
are
trying
to
build
our
update
mechanism
as
one
of
the
first
ones
that
is
doing
an
add-on
component
that
is
critical
to
users
of
OpenShift,
but
receives
operator
lifecycle
management
based
updates.
We're building hooks into the Istio component Mixer to make sure that we can minimize some of the duplicated components, and making sure that we take the kind of routing capabilities that are available in Istio and continue to build on those with more types of policy enforcement, etc. And with that, I'll yield the floor. Thank you very much.