From YouTube: State of Serverless: Knative, Istio, Kiali & OpenShift
A: I will start by talking a little bit about service mesh, even though we are here to talk about serverless. I think the service mesh plays an important part in the architecture as well, so let me give a very brief introduction and talk briefly about the history. In 2014, I think, there was this explosion of "you have to build microservices now." Microservices are a great way to build distributed systems and whatnot, but quite often those microservices started to grow in size and not be so micro anymore. You would start to add all these traits, all these capabilities, to those services, inflating the size of your application, of your source code, and then of your JARs as well, if we take Java applications as an example. So we had to build tracing, circuit breaking, routing, and all these cross-cutting concerns as part of the service. Fast-forward to probably 2018, around that time.
A: The idea of creating a service mesh came up, and there were a bunch of different projects started to implement that idea. The idea is that you can add a layer to abstract most of those traits and concentrate that implementation as part of the infrastructure. At OpenShift, we have the OpenShift Service Mesh, which is built on top of Istio, but it is not only Istio: you have a bunch of different components as well. For example, you have Jaeger, there is Prometheus for observability and monitoring, and you have Grafana and Kiali. The packaging of all these different projects and technologies is what we call the OpenShift Service Mesh, and the main goal, again, is to abstract most of those cross-cutting traits and concerns into objects that are part of your infrastructure, right? They are pretty much part of Kubernetes now, and you can then consume them from all those different services; it doesn't matter which programming language you are using.
A: We have a bunch of live demos and tutorials available at that URL. I'll leave the URL up there for a while, and it's in the presentation as well. You can access them and play a little bit with Service Mesh and see how it could apply to your use case.
To summarize, and to tell you a little bit about the roadmap: Service Mesh on OpenShift is coming to GA in 4.1, and if I were to summarize it without too much detail about the technology, the main customer benefits, the benefits you see in your applications, are these: a reduced need for developers to have operational knowledge, because you are again abstracting that out through the platform; and you are provided a framework to do service discovery, observability, and distributed tracing without adding anything to your code, because that's all abstracted by the platform. You can then inject all these different rules and traits using a policy-driven mechanism, and through those rules you can execute traffic shaping, you can do routing, you can do A/B testing and canary deployments, and perform some chaos engineering.
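As an illustration of that policy-driven mechanism, a canary rule in Istio is just a manifest you apply to the cluster, not code in the service. This is a minimal sketch; the host, subset names, and weights are made up for the example:

```yaml
# Hypothetical Istio VirtualService splitting traffic between two
# versions of a "reviews" service: 90% to v1, 10% to the v2 canary.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

The subsets themselves would be defined in a companion DestinationRule; gradually shifting the weights is what turns this into a canary rollout, all without touching application code.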
A: So, more links there on how to get started, how to install, and how to do the demos, but as I said, we have a lot to cover here. Now I'll dive a little bit into serverless. Looking at the landscape available in the CNCF — and we were right there at the serverless practitioners event — they compiled the serverless landscape into this nice chart, and you can see that the serverless space is, I mean, evolving and growing in the number of tools and the number of technologies they have across the board. So you have tools, you have frameworks.
A: You have platforms. Some of those platforms can only work in a hosted environment, so they are available just as a proprietary implementation on a specific cloud provider, but others are installable, right? You can actually install them anywhere you want. Most of those can be installed on top of Kubernetes, for example, and then, because of the portability that Kubernetes gives you, you can run that pretty much anywhere, on any platform you want, on-prem or on different cloud providers.
A: The other thing, whenever we talk about serverless, is this misconception that serverless is all about functions. One of the analysts we interacted with came up with a phrase that I like a lot. It pretty much says that function-as-a-service is serverless in the same way that a square is a rectangle, right?
A
It's
it's
not
really
the
only
way
to
do
service,
it's
a
specialization
of
service
that
is
specific
to
certain
use
cases,
but
there
is
more
to
serve
less
than
just
functions.
You
can
do
serve
less
to
a
bunch
of
different
workloads.
The
same
thing
would
apply
to
micro
services
as
well.
They
are
nice.
They
are
important
to
promote
separation
of
concerns
to
build
a
very
nice
architecture
with
distributed
systems
that
are
very
focused
on
the
business
need,
but
again
they
can
be
served
less
or
not.
A: There is nothing really specific to serverless in just doing microservices, or in containers either, although containers are one very important characteristic of these kinds of workloads, because they offer this nice standardized package to promote portability. As long as you have a container, you can run it pretty much anywhere you have a container platform. But again, those containers are not providing anything specific to serverless.
A: The container is just the format, the packaging, to make sure you have this kind of interoperability to run the workload wherever you want. So, if I were to summarize and try to position serverless in a different way that you can think about it, I would say: think about it as a trait that you can implement and apply to all sorts of workloads. It's also a spectrum, a continuum; there's a great talk by Ben Kehoe about that.
A: That talk goes into it a little more, but think about how serverless you are. For certain workloads, for certain things, you can be more serverless if you can give away control because you want to gain velocity; but for other things you actually prefer to keep that control with you, and you want to do certain things in certain ways, which then gives you the responsibility to do that.
A: So, if we look at the different kinds of workloads: you have microservices, you have functions, you have applications, and that layer I'm calling here the application framework layer. You have all the different ways you can write applications, but then at the bottom, in the end, those applications get compiled and packaged as a container, and once you do that, it becomes infrastructure and you can run it pretty much anywhere. So this is where the technologies and the different projects come into play.
A: Looking at Knative, you have the three main components available there: build, serving, and eventing. Build is a little tilted there, and the reason is pretty much that a lot of the facilities available in Knative Build are getting ported to a new project called Tekton, as part of the evolution of the project, which was released pretty much ten months ago. We started with Build as a module to pretty much get a container out of source code, out of a project, very quickly.
A: We saw the need for something a little bit more complex, so that we could actually orchestrate different steps of that build process. So then Knative Pipeline started, but very quickly we also realized that the scope of that would be beyond serverless, beyond what Knative was doing, and we ported it to a standalone project called Tekton. In Knative specifically, then, you have serving and eventing. Serving is responsible for the auto-scaling part, based on events.
A: It's going to allow your containers to auto-scale based on the number of events you receive, but it's also going to scale down to zero whenever you don't have any more requests. It also offers the integration with the service mesh, today with Istio. The other important module there is eventing, which offers this common infrastructure to connect different event sources to those workloads, and also to consume the events that are going to be delivered, that are going to stimulate the applications available in serving.
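To make that concrete, a workload under Knative Serving is declared with a single Service resource; here is a minimal sketch (the service name and image are placeholders, and the apiVersion has changed across Knative releases — it was v1alpha1 around the time of this talk):

```yaml
# Hypothetical Knative Service: Serving creates the deployment, route,
# and revisions for it, and scales it down to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:latest
```

From this one object, Serving derives everything else: a URL, a revision per change, and the autoscaler wiring.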
C: So that's a lot of nice words about what serverless is, or what we think serverless is, and one of the most important bits I want to stress is that serverless is more than functions, and serverless is not only functions — because when function-as-a-service arrived, it looked like it was; that was two or three years ago, I don't know what it looks like now.
C: So let me actually exit that, and this will be awkward because I have to hold the microphone, but please bear with me. Installing Knative: first, you can do that via OperatorHub, which is part of the OCP 4 installation by default. Just go to Catalog, then OperatorHub — and now I killed my filter — filter by that, and then you have the Knative Serving operator up here. Click on it, click Install, say Subscribe, and there you have it.
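The same OperatorHub install can also be expressed declaratively; this is a rough sketch of the equivalent OLM Subscription manifest (the channel, operator name, and catalog source shown here are assumptions and may differ per release):

```yaml
# Hypothetical OLM Subscription for the Knative Serving operator,
# equivalent to clicking Install / Subscribe in the OperatorHub UI.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: knative-serving-operator
  namespace: openshift-operators
spec:
  channel: alpha
  name: knative-serving-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```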
C: You end up with the system, basically, as I said, with Knative Serving installed. On to the next demo, which is, as I said, porting over a very old-school application and making it run serverless. The old-school application is a PHP guestbook — or actually, I should say... And last but not least, we drop the container name, because Knative doesn't like container names and will name the container itself, and that's literally it: we save that and apply it.
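The conversion itself is mostly deleting fields; a sketch of what the edited guestbook manifest could look like (the name, image, and port are placeholders, not the actual demo's values):

```yaml
# Hypothetical result of the edit: the Deployment boilerplate
# (replicas, selector, pod labels) is gone, and the container entry
# has no "name" field, since Knative names the container itself.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: guestbook
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/php-guestbook:latest
          ports:
            - containerPort: 8080
```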
C: And it worked: we see a pod coming up there, and now I can get the URL of that Knative service. There you go, pull this up, we go to that, and we see the same guestbook. Now, if we wait a few seconds — which is actually, I think, 60 to 90 seconds — we will see that this deployment will scale down. I'll tell you a bit more about that, but keep an eye on the display there and watch it go down.
C: So what we've seen now is that we've taken the application I talked about, that old-school PHP-nginx thing, which is just packaged in a container — and the container is the common denominator here. Because it's packaged in a container, it runs on Knative Serving, which makes it serverless, even if we were still living in the 2000s.
A: Again, like we were presenting before, it's not only about functions. This idea of migrating current applications to serverless is super important because, quite often, when we talk to the customers that we have, they are sometimes concerned: "Oh, do I have to rewrite all my applications in order to leverage the benefits of serverless?" And as Markus just demonstrated, that's not really the case.
A: You can pretty much take the same YAML file that you have today to deploy that application, delete some code — which is one of the best things to do — and you're good to go; you're pretty much then running that same application as serverless. As you can see, the pods are now terminated; it scaled down to zero, because again there is no traffic for that particular workload.
A: Going back to the slides to wrap it up: serverless and Knative are coming to OpenShift. It's available now as a developer preview technology, so you can try it out, but it's going to be tech preview in 4.2, and our goal is to take it to GA by the end of the year. Of course, this is an upstream project as well, and it depends a lot on the stability of those APIs and how the community is evolving upstream. The main benefits:
A: To summarize again, it's very familiar to Kubernetes users. If you are already using Kubernetes, already using OpenShift, you don't have to learn a completely new stack and install a bunch of other technologies on top of your Kubernetes to do this: you apply a couple of CRDs that extend the Kubernetes you have, and you have serverless capabilities. It can scale to zero, just like you saw, and it can auto-scale to N based on the demand you might have. It's not only for functions, again.
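The scale-to-zero and scale-to-N behavior mentioned above is tunable per revision through annotations on the Service template; a sketch (the annotation keys follow the Knative autoscaler convention, the target values and image are made up):

```yaml
# Hypothetical autoscaling hints on a Knative Service template:
# target 10 concurrent requests per pod, cap at 5 pods.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "10"
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containers:
        - image: quay.io/example/greeter:latest
```

With no annotations at all, the defaults still give you scale-to-zero, as the demo showed.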
A: Applications, functions — pretty much any container workload. And we did not demonstrate it here today, because again we're short on time, but there is also a powerful eventing module that can be used to trigger those applications, to trigger those containers, from Kafka, Camel K, Fuse, GitHub, and a bunch of different event sources. And, of course, it's based on an open-source project, so no vendor lock-in; there is nothing proprietary in what we just showed, very briefly, as well.
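As an illustration of that eventing side, an event source is also just a manifest that binds a producer to a sink. Here is a sketch of a Kafka source (the API group and version have moved between Knative releases, and the cluster address, topic, and service names are placeholders):

```yaml
# Hypothetical KafkaSource: events from the "orders" topic are
# delivered to the "greeter" Knative Service, waking it from zero.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - orders
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter
```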
A: I'll mention the project we announced with Microsoft called KEDA. KEDA allows you to use Azure Functions on top of OpenShift, and we are also integrating KEDA with Knative. The idea there is pretty much to reuse the event sources and the powerful eventing that we have in Knative together with KEDA, but KEDA also allows and enables Azure Queues and Azure Service Bus to trigger Azure Functions. Also, because we're talking about Azure Functions here, you can use the same CLI and the same tooling you are used to when you are deploying an Azure Function — the func CLI and the VS Code plugins — to create functions and deploy them targeting OpenShift.
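KEDA's side of this works the same declarative way: a ScaledObject tells it which deployment to scale and which queue to watch. A rough sketch — KEDA's API has changed since this talk, and the deployment, queue, and environment-variable names here are made up:

```yaml
# Hypothetical KEDA ScaledObject scaling a function deployment from
# zero based on the length of an Azure Storage queue.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders-function
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "5"
        connectionFromEnv: AzureWebJobsStorage
```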
A: There is more about that in the link here. Again, we just announced it at Summit, a week or two ago. And that's pretty much it — thank you very much.