From YouTube: What's New in OpenShift Serverless, Paul Morie and William Markito, OpenShift Commons Gathering, KubeCon NA
Description
What's New in OpenShift Serverless
Paul Morie (Red Hat) | William Markito Oliveira (Red Hat)
OpenShift Commons Gathering, KubeCon NA
November 17, 2020
In this session, we give you an overview of the latest developments in OpenShift Serverless, the Knative-based offering, with a quick recap of what the project is all about, updates on Eventing features (now heading to General Availability), and our implementation of Functions. Beyond that, you will hear about updates to the governance process of Knative, including the new steering charter, the trademark committee, and much more.
A: Nice, so let's dive in. But before we go too deep here, let's review some of the first principles that we really use to guide everything we do as far as product at Red Hat, especially on the cloud team. So of course we really focus on openness: working with communities and open standards, and driving development with those communities. You're going to hear some updates about that regarding recent developments in the Knative community. And then on the right side you have, of course, the hybrid aspect, which is really key for everything.

A: We are making sure that the experience we deliver for those projects as products is a great experience on public and on private cloud and, of course, works really well anywhere you want to run OpenShift. Today we're going to focus on Knative for the most part, like I said, and just to quickly recap.

A: What is Knative again? Knative comes with three main modules. There's Serving, which is really focused on request-driven compute: it's a way for you to scale applications up and down, even to zero, based on demand, based on the number of requests. Then you have another module called Eventing, which is really focused on the infrastructure to send and receive events to, again, start those applications.

A: So one addition is the ability to write functions as well, which is not something that you have available in Knative per se; again, that's something that we are doing with OpenShift Serverless. And we also package all of that with an operator that allows you to install, upgrade, and configure Knative inside OpenShift, so that you can leverage other services from the platform as well, things like logging, monitoring, all of those other services.
B: Yeah, so let's talk about a subject that's probably important, to some extent at least, to almost everybody in this room, which is open governance. If you follow the Knative project, you're probably aware that open governance within Knative is something that we've been working on for quite a while.

B: We now have a TOC that is elected and composed of folks from IBM, Red Hat, VMware, and Google, and since that time earlier this year, when we moved to that elected model for the TOC, we have been working toward adopting a similar model in the steering committee. And actually I have good news, because I'm here to talk about the specifics of that new model that we've adopted in a new charter.

B: So for context: before we adopted this charter, the steering committee had been in a bootstrap phase for quite a while, since early 2019, and the model was basically that there were appointed representatives from the specific companies that were most active at the time the steering committee had been formed. So we had some representatives from Google, from IBM, from Red Hat, and from VMware.

B: Additionally, there were no rules or guidance for how new members would be added to steering and how we would maintain that committee over time. And one of the things that we heard from folks who were interested in engaging in the project was that the lack of clarity around how you would life-cycle this committee sort of encumbered the project, because there wasn't a clear way to develop influence at the level of steering. So in the new charter we adopted some changes that I think address those concerns.
B: So we have moved to a new model where steering will be elected and where no vendor can hold a majority of seats on the steering committee. And this may sound familiar if you follow Kubernetes governance; in fact, the Kubernetes governance scheme was one of the things that we looked at a lot for inspiration as we came to arrive at this new model.

B: We've literally just adopted this new charter in the last couple of weeks, so executing it is sort of a work in progress now. But we will have nominations open soon and elections later this year to begin cycling toward that elected, community-based model. We'll have two seats up this year, and there will be at least three seats up for election next year. That is sort of the TL;DR of the steering changes.

B: One of the things that came up in our community discussions around this (and there were a lot of great discussions within the community; in fact, you can go and watch some of the videos online if you would like) was the question of what is in scope for the Knative trademark. That is maybe a better fit for a committee with a slightly different organizational scheme, since trademark is probably most important to vendors.

B: The seats on the KTC, the Knative trademark committee, are held by vendors, and members of this committee represent their employers. Currently this committee looks sort of similar to the bootstrap steering, in that we have Google represented as the owner of the project, and we have IBM, Red Hat, and VMware represented. Going forward, the KTC will consider adding new members every year. So this is something else that I wanted to make sure we touched on in this update, because it helps to address that concern about clarity: how can you, as an individual or as someone making choices about where you spend your developers' open source time, engage with the Knative project and help develop influence?

B: Going back to the KTC: when they look at adding new members, they'll consider contributions that any particular vendor has made, and there's also a process that allows vendors to articulate contributions they made that are harder to count, because not every open source contribution is easy to quantify, right?
B: So, in the event that you're thinking about engaging with the project, just know that when we consider membership in the trademark committee, any contribution counts and can be counted. And there's a process for folks who feel like maybe they did more things that didn't touch GitHub than things that did; you can articulate that with the exception process. So, key things... oh, go ahead, William! Yes.

B: Absolutely, yeah. Since we know that it's, one, hard to count some types of contributions, and it's also hard to even foresee all types of contributions, in the event that there's something you want to make sure the trademark committee considers, you can write up a blurb articulating what you feel your organization's contributions are.

B: So the key takeaways I want to leave folks with around governance are that we have significantly improved clarity around the governance in these two different dimensions, around the steering committee and the trademark committee: the composition of those committees, elections for steering and who can serve, and clarity around how vendors can get a seat on the trademark committee. And, of course, there was great community participation during this process.

B: I really want to just take a second here and thank everybody who participated, both my colleagues at Red Hat and my open source colleagues in the community. I'm really happy to be able to give you all this update today, and I think that community participation was really key to making it happen. So thanks, everybody who participated, and if you're really interested in how this particular sausage was made, you can go and watch the videos online if you search for "Knative steering committee".
A: Nice, yeah, definitely. So elections and everything going on this year, not only in the U.S., but I guess also in open source projects. That's awesome, great! Yes, so let's do a quick recap on Serving now. And again, the idea here is really just to do a brief recap and point out some of the main components that are part of Serving, starting with Service. So what's a Service, Paul? What's a Knative Service?
B: That's a good question, and don't let the name fool you: it's different from the Kubernetes resource called Service. It's not the name I would have personally chosen; in fact, there was a long, drawn-out process of arriving at this name in the community that the project had at the time.
B: But when you think of what a Knative Service is, it is basically a very high-level resource that is similar in certain ways to a Kubernetes Deployment, in the sense that it generates these other resources that actually go do the work. So the Service is basically the highest-level container that we have (no pun intended) from an API standpoint: it encapsulates the configuration for the serverless service that you're deploying and the routes that bring traffic into it.
B: Moving down into what those mean: there's a resource called Configuration, and its job is to generate immutable snapshots of an application, called Revisions, that the Routes, which are also specified in the Service, bring traffic into. So you can think about that Service as sort of a serverless flavor of Deployment, where you can specify both the things that you'd expect to specify in a normal Kubernetes Deployment, as well as information about how traffic should go into those Revisions that are created and how the traffic should be split.
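To make the shape of this resource concrete, here is a minimal illustrative Knative Service manifest; the service name, image, and revision names are hypothetical, but the `traffic` block is what drives the splitting described above:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                # hypothetical service name
spec:
  template:
    metadata:
      name: greeter-v2         # becomes the name of the generated Revision
    spec:
      containers:
        - image: example.com/greeter:v2   # hypothetical image
  traffic:
    # Split traffic between the previous Revision and the new one.
    - revisionName: greeter-v1
      percent: 90
    - revisionName: greeter-v2
      percent: 10
```

Applying a change to the template generates a new immutable Revision, and the `traffic` list controls how requests are distributed across Revisions.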
A: Yeah, what I really like about Revisions and this idea of snapshots is really the ability to enforce some best practices. Every time you push a new change, to either configuration or code, that snapshot is going to be generated, and that allows you, for example, to do things like generate a preview URL for that particular version of the application, without sending all the production traffic to that particular version of your application, right?
A: Maybe you want to do what they call a dark launch, so it's just people who know the URL. A random URL is going to be automatically generated for that Revision (you can influence that, of course, if you want), and that's one of the patterns I see that is really useful for Revisions. Then another one, of course, is the usual A/B split or canary deployments and whatnot. But there's this idea of doing live previews of code that, again, you may not want all the production traffic to consume.
A: That traffic split can happen with all of this functionality provided out of the box, just with Knative, but you can also extend that with a service mesh, right? You can use a service mesh if you want as well, but that's an option; it's not something that we are imposing in order to perform that traffic split. Right, Paul?
B: Yep, yep, and I agree that the ability to just get traffic splitting out of the box is really, really powerful. If I think about my previous industry experience around "we want to test some alpha feature and we want to send maybe one percent of traffic to it and just see what happens": I have spent time writing infrastructure to do that. So getting it out of the box is pretty wild, pretty cool.
A: Yep, yep. One thing also that we are working on internally, as far as experiences in OpenShift, is to make sure that there is an easier path for you to generate those snapshots, those Revisions, using pipelines, for example Tekton.
A: So now you have a CI pipeline that can, from a Git project, build and deploy a new version of your application and automatically generate a preview URL for your app. And maybe you want to post that back, for example, in your PR, so that the engineering team, or maybe your designers, can see the layout and interact with it, and only after that do you eventually promote that application to prod, right?
A: It's really, really interesting. Cool, so that's essentially Serving in a nutshell. And then, of course, one thing that we did not touch on as far as APIs, but that's just inherent to this module, is the ability to scale up and down based on requests, right? That's what's really triggering this application here, and those requests, again, can be plain HTTP, of course, but they can also be CloudEvents, right?

A: They will be wrapped in an HTTP request, but the payload itself can be a CloudEvent, which really leads us into the next section here about Eventing. Waiting for my slide to reload.
A: There you go. Eventing is really the module that we want to talk more about today, because, again, Serving has already been considered GA in OpenShift Serverless since March, I believe, and now we are finally taking Eventing as well and making it a GA module, right? With Eventing, you essentially have the ability to connect external systems to your application. So maybe you want to cover some of those APIs, then?
B: Absolutely, yeah. So let's start with sort of the earlier-generation APIs, which are single-tenant. In this earlier regime, developed early in the project's lifetime, there are event sources, and you can think of these as the on-ramps for events to come into the system. There's a variety of these for different cloud services and for different middleware brokers.
B: I probably shouldn't have used the term broker, or maybe we shouldn't have used the term broker in Eventing, but when I say broker here I mean an MQ-broker-type thing, or Kafka. So there are a number of different event sources that are the on-ramps for events to come into the system.
B: And of course you can build your own if the exact one that you want doesn't exist. Continuing that transportation analogy, there's something inside the system called a channel, and you can think of a channel as the road that an event which has come in via that on-ramp travels through the system on. Channels are basically Eventing's forwarding and persistence layer. There's an in-memory implementation that's maybe more suitable for development, but then there are also flavors backed by different durable stores like ActiveMQ or Kafka.
B: More recently, we've got what you might call eventing mesh APIs, and the central thing there that we'll talk about is the broker. The broker is an entity that can send and receive messages from multiple sources and subscribers. Brokers work with triggers, where the trigger sits between the broker and the receiver of the event and implements filtering.

B: So if you don't want every event that's going into a broker to be received by a particular receiver, you can use a trigger to filter those events out. Then there are some additional higher-level APIs; we're thinking here about enterprise integration patterns. There's a Sequence that allows you to wire up an ordered series of subscribers, and it sort of generates the channel and subscriber setup that you need to pass from A to B to C to D. And then there's another variant called a Parallel that allows you to wire up a fan-out to multiple subscribers and associate filters with all of those.
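A sketch of the broker-and-trigger wiring might look like the following manifest; the trigger name, event type, and subscriber Service are hypothetical, and the exact API version may vary by release:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger             # hypothetical name
spec:
  broker: default                  # the broker this trigger subscribes to
  filter:
    attributes:
      # Only events with this CloudEvent "type" attribute pass the filter.
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor        # hypothetical Knative Service acting as the sink
```

Events flow into the broker from any source; each trigger forwards only the events matching its filter attributes to its subscriber.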
A: Nice. So, drawing a diagram with those APIs, you would get this diagram here, where at the top you see the idea of the broker. You're seeing all the different sources and the multiple event types going in: two, one, and three there (it should be a three). Then you see that the broker is doing the filtering, saying, hey, these types of events I'm sending to this application, which is represented by a sink, and this other type of event I'm filtering and sending to a different sink. This built-in routing and filtering mechanism is really powerful, and it can be used to implement many of these EIPs. And then similarly for channel, just adding a diagram to that: the idea here is that you essentially could have multiple sources sending different event types, but that channel will carry those events and send all of them to the subscribers, right?
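The channel-and-subscription wiring described here can be sketched as manifests like these; the names and the sink Service are hypothetical, and the in-memory channel flavor is used purely for illustration:

```yaml
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: demo-channel               # hypothetical; Kafka-backed channels also exist
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: demo-subscription          # hypothetical
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display          # hypothetical Knative Service acting as the sink
```

Every event placed on the channel is delivered to each subscription's sink; unlike triggers, subscriptions do not filter by event type.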
A: That's where the subscription comes into play, and your application will be a sink there. But keep in mind, again, as Paul said, that the sink could be a Knative Service, it could be a URI, or it could be just a Kubernetes Deployment as well. So this is what's coming as far as GA, and we're going to see a little demo of that as well towards the end. But there's another thing that OpenShift Serverless also adds to Knative.
A: What we want to make sure people understand, and one of the key differences between plain serverless containers and serverless functions, is really what goes inside the container that your application will be running, right? When you're building a serverless container, you're responsible for what goes inside that container; we have only some very small requirements there.
A: Now, when you transition to this serverless functions model, that's where you get this extra piece of code, the function runtime. The function runtime really helps you as far as implementing the HTTP server: the wrapper around how you're going to receive those events and how those events are going to be sent to your user code. And also, because we are in control of that function runtime, we can be a little bit more opinionated about it.
A: So, for example, maybe there is something specific that we want to do as far as tracing: we can package that tracing capability in our function runtime, whereas in a container you can still, of course, have some choices and decide to choose your own implementation or go a different route there. So that would be the difference here. Now, looking at most solutions in the market today, I would say that quite often you have to choose between one or the other, and they have completely different user experiences.
A: I think the main difference that OpenShift Serverless is bringing to market here is this idea of serverless containers and functions in the same experience: you have the exact same user flow, and you can go back and forth. You may start with a container and then see a good fit for benefiting from a function, or vice versa.
B: Yeah, and I think that's a great quality for us to have, because if we look at how folks tend to use functions and microservices, it's very common to have a spectrum of things: maybe you've got some microservices that evolved out of functions, and maybe some microservices that you already had that you want to get the benefit of event activation and scale to and from zero for. But you also have things that you're implementing as functions.

B: So it's nice that we treat them similarly, because there is that interplay and back and forth, and that evolution of systems where we maybe start out using functions and evolve to microservices, or decompose a microservice into functions.
A: Yeah, yep, that's super powerful. So, essentially, what we are doing with functions, then: we want to make sure that you inherit and benefit from everything Knative already provides. What we are doing is really providing a plugin to kn (kn is the CLI for Knative), and that plugin we are calling "faas" for now. That plugin allows you to have a local developer experience, which, again, is super important.
A: If you want to iterate really fast, you may not have access to a cluster or to the cloud all the time. So, again, you can have a local build experience and iterate. But when you build, we want to make sure that the way you are producing those containers is also standardized.
A: So we are leveraging Buildpacks for that, and we are already providing buildpacks for three runtimes out of the box, Quarkus, Node, and Go, but that list, of course, will extend as we progress on our journey from Developer Preview to Technology Preview. Once you build those containers using the faas CLI, you can then, of course, deploy, and when you deploy they become Knative Services, right? So, again, they get all the things that we talked about here for Serving and Eventing.
A: You can implement single-page apps or things of that nature, but one of the most powerful use cases for functions and serverless, of course, is to deal with events, so you can receive CloudEvents with your functions as well. We're going to see a little bit of that experience in the demo now, which I pre-recorded to make sure I could talk and not be concerned with typing.
A: At the same time, let me start sharing my screen here, and I will walk through that; and then, if we have enough time, I can also do a little live demo of our console. So I'll hit play here. I have an empty directory, and the very first command we're going to run (let me do a quick pause) is kn faas init, and I'm going to specify what type of function it is, so that the template the tool generates is already configured for that particular type.
A: It could be "events", in which case it's going to receive CloudEvents, or "http". Then the -l flag is used to specify what programming language you want to use for the runtime; in this case we're picking Node. Now, triggering a build, again, is kn faas build. Notice that, of course, I'm not specifying any particular details about a Dockerfile.
A: This is all the code that you need now in order to process a CloudEvent. Now that the build is done, I'm just going to perform a deploy, and on the left side you see the OpenShift console. Here I already have two Knative Services running, one Quarkus one and another Spring application, and they are connected to a channel. Remember from the earlier explanation that a channel is the path that carries events from event sources to your application.
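The Node function deployed here receives CloudEvents. A handler along these lines is roughly what such a function might look like; this is a hypothetical sketch, not the demo's actual code, and the `context.cloudevent` property is an assumption about the runtime's conventions:

```javascript
// Hypothetical sketch of a Node function body. The function runtime is
// assumed to invoke the exported handler with a context object and, for
// CloudEvent invocations, to expose the event on the context (exact
// property names depend on the runtime's conventions).
function handle(context) {
  const event = context.cloudevent || {};
  const data = event.data || {};
  // Build a small response from the incoming event.
  return {
    message: `received ${event.type || 'unknown'} event`,
    echo: data,
  };
}

module.exports = handle;
```

The runtime wraps this in an HTTP server, so the user code only ever deals with the event itself.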
A: So now, every update on Jira, or every event that this timer triggers, will land on this microservice that is built in Quarkus, on this Spring application that is also another microservice, and on this function that we just deployed using the functions capability of OpenShift Serverless, which is built in JavaScript. So, again, very short, very simple, but still very, very interesting and very powerful.
B: Now, one of the things that I want to call attention to, if we can just pause this here, is that if you look at the channel, the one that has a little binary type of text on it, you'll notice that's an in-memory channel. That is something that is probably workable for you as a developer, but not in production, where you don't want to have the chance of lost events.
A: Yeah. So let's take a look at the topology view. Now I'm going a little bit off script here, just to show you a live cluster as well; we pre-recorded the main demo since I probably couldn't talk and drive it at the same time. But first thing here:
A: This experience, this visualization that you're seeing here, is really the way we are showing multiple Revisions for one application. In this case, you can see that this application, this particular Revision, has 100% of the traffic, and then I have these other applications here that are essentially representing PRs: the number of the PR that was sent and that triggered a pipeline that built this container as a Revision.
A: Here they all have zero percent of the traffic, but as I hit those URLs, you see that they will start from zero; they all have unique URLs. So again, this is PR 14, this is PR 15, but if I hit, of course, the main URL for the Service, that is going to trigger the one that has 100% of the traffic. Now, for this experience, you can of course configure the traffic split using the CLI, but we also offer a way for you to do that using the UI as well.
A: So I can say, you know what, I think this one here should take 50% of the traffic, and I want the latest Revision, that's the one I think is good, to take the other 50% of the traffic. And now you see, if I hit the main URL, I'll have a 50% chance of getting either of these particular versions of my application.
A: Now, as far as eventing, and to build on what Paul just said: whenever you are creating a channel, we offer an experience where, again, you can just select "in-memory", and that's going to create an in-memory channel, or you can select Kafka, right? Then you need to specify, of course, what Kafka broker you want to use. That was already pre-configured for the Eventing installation in this cluster, and you can literally just specify the name of the broker here.
A: And the last piece to share, very briefly, is the event source experience as well. These are some of the event sources that we have out of the box. Again, you can select Kafka, which is, of course, a very popular one; you can just point to the bootstrap server and start receiving events. Or you can pick any of the event sources powered by Camel K. So let's say I pick SQS.
A: You provide the configuration here; for now this experience is YAML, and we are working on auto-generating forms for event sources as well, but it's a very simple configuration: just your keys and the queue name for SQS. Hit create, and that's going to create the event source for you.
A: So I'll do one for Kafka here, because that's, of course, something I already have running in my OpenShift cluster. I pick the consumer group, and here I'm going to select the sink. I could use a URI, so this could be any destination, any URI that can receive that Kafka message, or an application. In this case, I'm just going to select the previous application that I built and hit create.
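Behind that form, the result is roughly a KafkaSource manifest along these lines; the names, topic, and bootstrap address are hypothetical, and the exact API version may differ by release:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-demo-source          # hypothetical name
spec:
  consumerGroup: demo-group        # the Kafka consumer group to join
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092   # hypothetical broker address
  topics:
    - demo-topic                   # topic(s) to consume from
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-processor        # the application that receives the events
```

The source consumes messages from the listed topics and delivers them as CloudEvents to the sink, which can equally be a URI instead of a Service reference.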
B: Now, one of the things that I thought would probably be good to disambiguate, in case there was any question: when we talk about the Kafka source, this would be something that you would use to consume events from Kafka.
A: I was just demonstrating how the experience would be, again, if you want to connect your in-memory channel to your application and then eventually land the event source on this channel as well. Now, I am running a 4.6 nightly build, so maybe there's something here where my drag-and-drop is not behaving properly, but you get the idea, right? I'll go back to the slides now.
A: One brief thing that I'll just point out is this integration that we did with Pipelines as well, which allows you to produce a Tekton pipeline out of the box whenever you are importing an application from Git. I'll very quickly show you that; I know we are getting close on time, but let's say I pick one particular application here. So I'll point at this vanilla Spring application from upstream, select Java, select Knative Service, and now here I can select "add a pipeline".
A: When I hit create, it's going to start a build, and eventually, right, you see the build is new, the build will be updated to running, and your application will be completed here. When I go to Pipelines, you see that there is a new pipeline that was created, and you can still configure that pipeline if you want, using either the pipeline builder, adding more steps to your pipeline, or editing the YAML. So a very, very interesting experience, and again very easy to get started with as well.
B: Well, I always have trouble calibrating to that. It doesn't get easier when there are no heads nodding in the room that you can look at, but I think we've hit folks with a lot of information.
B: So I'll just say thanks a lot for watching our session, everybody, and I hope that you can check out OpenShift Serverless and the Knative project. We'd love to have you contribute, and we'd love to have you involved in the Knative community. Thanks a lot.