Description
Overview of OpenShift Serverless and Serverless Functions coming in the 4.8 release of OpenShift
Guest Speakers: Naina Singh and Lance Ball (Red Hat)
A
Well, hello, everybody, and welcome to another OpenShift Commons briefing on OpenShift TV. I'm Diane Mueller, and I'm really thrilled to be here. We're continuing our series of talks around 4.8, the latest release of OpenShift, and today we have with us Naina Singh and Lance Ball, who are going to talk about OpenShift Serverless and Serverless Functions: what's going on in the latest release, and what's coming down the pike. So I'm going to let the two of them introduce themselves.
B
Okay, namaste, all. I am Naina, and today I am here with my team colleague, Lance Ball. Lance, would you like to introduce yourself?
B
And today we are here to talk about OpenShift Serverless and the OpenShift Serverless Functions that will be released as a tech preview with OpenShift 4.8. Only the Functions part is tech preview; OpenShift Serverless itself is already GA and ready for your production applications. So let's get on with our show and tell: what is serverless?
B
We see lots of definitions of serverless today, and most prominently there are functions, snippets of code, but that is just one piece of it. So I wanted to start this session by saying how Red Hat sees serverless. It is a deployment model where you can run almost any container, or a function, which we're going to talk about later, without worrying about the Kubernetes complexities, with the additional benefit of your workloads scaling up on demand and going back down to zero.
B
So why serverless? As we all know, the technology is no longer a novelty; it's like just turning on a switch. There are no complexities that I need to know about, so I can focus on value creation, on what matters most to me. Serverless is a simplified approach to Kubernetes that, with its sensible defaults and dynamic scaling, gives you more room for experimentation and more granularity, and it turns the old story of extensive architecture and design work, of worrying about management, provisioning, and capacity planning, into a once-upon-a-time tale.
B
With that covered, what serverless is and why serverless, I would like us to go a little bit deeper and see where serverless fits. The first thing that comes to mind is an application with an unpredictable or bursty number of requests, because serverless is a deployment platform. We also see it where you want maximum resource utilization and a reduced carbon footprint.
B
Event-driven is at the heart of serverless, so you can use it to build an event-driven architecture that makes your apps loosely coupled, reactive, and distributed at the same time. It also elevates the developer experience: since Kubernetes can get complex, you can reuse the skills of your existing developers, and you can try out different deployment strategies and things like that. So, with that brief intro out of the way, let's kick things off.
B
I will start with the serverless pattern. At its core it's simple: an event occurs and it triggers your application. When running on Kubernetes, that means starting a container to handle that event. Your app does the processing and produces results, and the event could come from a variety of sources: HTTP requests, Kafka, Slack, Twitter, you name it. Depending on how many requests arrive, your application scales up to handle the demand, and when it has been idle for long enough, it scales back down to zero.
B
OpenShift Serverless has three components. Serving allows your containerized app or function to be deployed as a Knative Service and to autoscale up and down to zero; these requests are what make it request-driven. Eventing is the infrastructure that sends and receives the events that trigger your application, the one served by Serving. And last is the command line, which lets you interact with these constructs, making it easy to create resources and connect to the served applications without needing to deal with YAML at all.
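To make the Serving piece concrete, here is a minimal Knative Service manifest of the kind the CLI generates for you; the service name and image below are placeholders, not from the talk:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:latest   # placeholder image reference
          env:
            - name: TARGET
              value: "World"
```

Applying this with oc or kubectl gives you a request-driven service that scales to zero when idle, with no Deployment, Service, or Ingress objects to write by hand.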
B
It comes with a one-click install experience through an Operator, and it provides a day-2 experience with monitoring and updates. There are no external dependencies, and the user experience, in addition to the CLI, has been augmented with a developer and admin experience in the Dev Console, and we will see this in our demo today.
B
One more concept I would like to cover, because we are going to hear it a lot when we talk about serverless, is the CloudEvent. Event-driven architecture is key for our modern-day challenges, and when everything is an event, that raises the need for consistent, accessible, and portable events.
B
Serverless uses the CloudEvents specification, which is a CNCF project. It describes events in a common way, as messages that can be understood by all the disparate and heterogeneous components of your solution, running wherever they run, because hybrid cloud is key for Red Hat.
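As a reference point, a CloudEvent serialized as JSON carries a small set of required context attributes (specversion, id, source, type) plus optional data; the values below are invented for illustration:

```json
{
  "specversion": "1.0",
  "id": "a1b2c3-d4e5",
  "source": "/telegram/bot",
  "type": "telegram.message",
  "datacontenttype": "application/json",
  "data": { "text": "hello" }
}
```

Because every producer and consumer agrees on this envelope, a Kafka source, a Telegram bot, and a function can all interoperate without custom translation.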
B
One more thing that I will cover is Eventing.
B
Since we are talking about event-driven architecture: Knative Eventing is the infrastructure for consuming and producing the events that trigger your serverless application. Its most basic component is the event source, which provides the mechanism by which event providers connect to your application and send it events.
B
In this topology, the event sources that you can see on the screen get connected to a broker or a channel. You can think of the broker or channel as an event mesh; it could be a sink, or you can connect the sources directly to your application, where they trigger it. On the right-hand side you see the Revisions and the traffic split.
B
That is the deployment strategy from the Serving part we were talking about, and we'll see it in our demo as well. This concept brings me to Serverless Functions, which we have added as a tech preview in OpenShift, just to remind you, and we would love for you to try it out and give us feedback. So, when every container can already run in a serverless fashion, what does a serverless function bring to the table?
B
We call it a programming model that lets you focus on just the value creation. It's a single-responsibility, event-driven Knative Service. Here you don't need to focus on building an API framework, or figuring out which API framework to use, or how to build a container, or how to configure and deploy a container, how to do a readiness check, how to open a port, and things like that. You can develop and test it locally, and you can deploy it into your cluster with just one command.
B
In the real world, we believe solutions are going to be a mix of legacy monoliths, modular monoliths, whatever you want to call them, plus microservices and functions, all connected together to deliver value. I would like to note here that Red Hat builds the upstream project Boson, on which Serverless Functions is based. We have donated it to the Knative community, and we are very excited to take it further and define it as an industry-wide standard.
B
Do you want me to stop sharing? Would you like to share?
C
Okay, all right. Thanks, Naina, for that introduction, I appreciate it, and I assume that if there's any issue with seeing my screen, someone will speak up. So, right: serverless functions. Naina talked a lot about the programming model, and about the CLI that will allow you to create a new project and deploy that project to your cluster.
C
You're going to see all of that in a few minutes with the demo, but before we get into that, I just want to quickly give a brief overview of the technologies in OpenShift Serverless Functions. It consists of basically four things. We've got buildpacks that know how to take a function project and turn it into a container that can run in the cluster.
C
You invoke all of this through a plug-in to the Knative CLI, and we'll see that. We have runtimes that invoke the functions, and then we also have templates for function project creation. The architecture looks a little bit like this: we've got the func binary, this plug-in here, and it's got the templates built into it. When the user types func create, a new project is created; then they type func build.
C
The build combines the project with the buildpacks and you have a resulting image. Then func deploy takes that resulting image, pushes it to a container registry and then ultimately to the cluster, creating what is, in the end, a Knative Service.
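The create, build, deploy cycle just described comes down to three commands; this is a sketch of a typical session, where the project name and registry are placeholders and the exact flags can vary between releases:

```shell
kn func create fn-example --runtime typescript   # scaffold a new function project from a template
cd fn-example
kn func build --registry quay.io/example         # buildpacks turn the project into a container image
kn func deploy                                   # push the image and create the Knative Service
```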
A function in OpenShift Serverless looks just like any other Knative Service, so you can feed event sources into it as you would for any other event sink in Knative.
C
A function can communicate with the event brokers and also with other services, and we'll see all of that. I'm going to do a demo now, so let's take a look. Let's go over here to the terminal real quick.
C
No worries. Okay, so now I've got this project, and I can cd into the project directory. If I look at it, it looks like just about any other TypeScript project: we've got a source directory that contains my index.ts, which is my function. Let's take a look at what that function looks like. Here we go; it looks just like any other typical TypeScript function.
C
It
accepts
a
context,
object
and
a
cloud
event
object
and
returns
a
message,
and
in
this
case
we're
just
checking
to
see.
Is
there
a
cloud
event?
If
there
is,
we
return
an
error,
not
we
just
log,
I
mean
if
there
is,
if
there
is
a
cloud
event,
we
log
it
to
the
cli.
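A handler along those lines, a minimal sketch that stands in for the generated index.ts (the Context and CloudEvent shapes here are simplified stand-ins for the framework's real types, not its actual API), might look like this:

```typescript
// Simplified stand-ins for the function framework's Context and CloudEvent types.
interface CloudEvent {
  type: string;
  source: string;
  data?: unknown;
}

interface Context {
  log: { info: (msg: string) => void };
}

// Handle an incoming invocation: if a CloudEvent arrived, log it and
// echo its data back; otherwise return a plain greeting.
export function handle(context: Context, cloudevent?: CloudEvent): { message: unknown } {
  if (cloudevent) {
    context.log.info(`received event of type ${cloudevent.type}`);
    return { message: cloudevent.data };
  }
  return { message: "hello" };
}
```

The framework calls the handler for each invocation, so the same function serves both plain HTTP requests and CloudEvent deliveries.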
So let's see what that looks like; let's run it. I can do kn func build and use the -v flag so you can see all the output.
C
I give it a registry destination, which is my personal registry, and it will eventually push the image there. You can see that we're using buildpacks to create a function image, and then I can run it with kn func run, and now it's just running locally. So how do I test this?
C
Well, I could use curl or something like that, but we have a little tool with the plugin that allows you to test locally. I can run kn func emit and give it some data, "hello world" basically, and then give it a sink of local, which is a special flag that says: send this event to my function running locally. And you can see now that the function received that event and is printing it out.
C
Just like you saw in the code. I can do an npm install and run the tests; tests are built in, npm test, and we can see that it does some linting and runs a few unit tests and a few integration tests, and everything passes as it should. That's pretty neat. So this is our viewer function, and I want to deploy it. I'm already logged in to my Kubernetes cluster, so I can just run kn func deploy, and it will build the image again.
C
There is a flag I should have provided to tell it not to build the image again, but okay. So it's going to build the image and push it up, and you can see that it's now deploying the function to the cluster. Let's go take a look at that. I've got my topology here, and here it is. I already had a trigger in the system created for it, so this is just a little trigger.
C
My function will receive messages of type telegram.message that happen to be in the Knative Eventing system. You can see here on the left that, prior to this demo, I created what's known as a Kamelet. It's part of the Camel K project, and this is a Telegram Kamelet.
C
So I have a Telegram bot, and I can send messages to it; this Kamelet receives them and pushes them as CloudEvents into the Knative event broker, and the function that I just created and deployed will respond to them. So I can say "hello, here I am", talking to the bot in Telegram, and if I look at the logs, we should see that it received a CloudEvent. Yes, a CloudEvent of type telegram.message. I can say hello again, and we can see the new CloudEvent arrive.
C
That's pretty neat; that's all pretty cool! It shows you how quickly and easily you can take a TypeScript function, create it from scratch, and get it deployed into your cluster, really in just under a minute, if I hadn't been talking so much. One thing that I will show you here is how to get rid of it. Let's say we want that function to go away: I'll do kn func delete.
C
It doesn't delete what's on my local disk, but it will remove that Knative Service from my cluster. Sometimes it takes a minute to reconcile, but it will go away eventually. While we're waiting for that: I want to do something similar to what I just did with that Telegram bot, but I want to have some cool interaction between a bunch of different services.
C
I've got a project called telegram-image-analysis, and it's got a few different functions associated with it: one is called the receiver, one is called the processor, and one is called the responder. The way this works is that the receiver receives Telegram messages, just like the viewer that we just deployed, and then it checks: does that Telegram message have an image in it? If it does, it gets sent on to the processor.
C
Okay, and then the processor takes any Telegram image and calls out to the Microsoft Face API to examine the image and look for any faces that happen to be in it.
C
And then the responder responds to the bot. So let's do that. The first thing I'm going to do is deploy these, and then we can talk about what the different functions do. I'm going to deploy the responder first, I suppose, and you see I can give it a directory name with the -p flag, so we're deploying the responder.
C
Do I have an image for that? I may not have an actual image for that, but I can show you what the project looks like from the code perspective, if that works for you.
C
Each of these is a function: we've got the processor, the receiver, and the responder. The receiver is a Go function; we support Go. It's a function that, again, receives a CloudEvent and, as I said, checks whether it's a Telegram message that has only text or one that has an image.
C
Another thing I wanted to show is configuration for your functions. A lot of times when you're writing an application, you've got some secrets that you don't want checked into GitHub, for example API keys. If you have an API key for, say, a Telegram bot, you don't want to check that in, because it could be seen out in the real world.
C
So what we have is the ability to specify things like environment variables in a configuration file called func.yaml. func.yaml is the primary configuration file for all function projects, and it specifies things like what your buildpack builder is. As I said, you can specify environment variables, you can add annotations, just regular Kubernetes annotations, and you can mount volumes from Kubernetes.
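An illustrative func.yaml along those lines might look like the fragment below; the exact field set varies by release, the values are placeholders, and the secret reference is one hedged example of keeping an API key out of the source tree rather than the talk's actual configuration:

```yaml
name: responder                            # function (and Knative Service) name
runtime: node
image: quay.io/example/responder:latest    # placeholder registry/image
builder: default                           # which buildpack builder to use
envs:
  - name: TELEGRAM_API_KEY                 # resolved at deploy time, never committed
    value: "{{ secret:telegram-bot:api-key }}"
annotations:
  app.kubernetes.io/part-of: telegram-image-analysis
```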
I won't be showing you that today, but okay: we've done the receiver and the responder, so let's do the processor.
C
The receiver, as I said, is a Go function. The processor is written in Quarkus, and this one is kind of nice, because all of the APIs are just a little bit different: they're idiomatic to the language or runtime they're part of. With Quarkus, what you get is a function that has this @Funq annotation on it; it maps CloudEvent types to that function, accepts input, and again receives a CloudEvent.
C
So now I can add a trigger to filter all messages coming from the broker that are of type telegram.message, and those will be sent to the receiver.
C
And then, in some cases, we can do things like use YAML files to deploy our triggers, and I'll do that for the responder.
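A Trigger manifest of that shape, sketched here with made-up names, subscribes one function to a single event type on the broker:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: telegram-message-trigger     # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      type: telegram.message         # deliver only events of this type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: receiver                 # the function's Knative Service
```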
C
As you can see, the little blue circles have gone away here for the receiver and the processor. That's because we're in a serverless environment and these functions have spun down to zero. Knative has determined that these functions haven't been receiving any traffic, so in order to conserve resources they've spun down to zero. But we got a pretty good response here from the responder; very quickly it said: send me an image with faces in it, and I will analyze it for you.
C
So I can pull up an image here and send it, and we'll see what happens: the Telegram source received that Telegram message, reformatted it, and sent it to the broker.
C
The broker sent it as type telegram.image to the processor; the processor called out to the Microsoft Face API and sent a response back to the broker, which the responder then picked up, responding with information about what it detected in the image: a very happy person. And I think that about covers it. This is, I think, a really fun demo.
C
I love to do this because I think it shows some of the cool and creative ways you can take event sources and tie them together. In this case it's just a fun little application, but you can certainly see how events can drive processes within your business with serverless functions. And so I think that is about it for me.
B
So while we are seeing if there are any questions, I'm going to act as an audience stand-in and ask you some myself.
B
So you showed us a couple of commands with the CLI: create, run, deploy, and the emit one, which was really cool, that we can just test it without using curl or having an event source locally. What other things can I do with the kn func plugin?
C
Well, you can run it locally, which I think I did show. You can deploy. It has the ability to set up configuration for your application through an interactive, text-based UI, so it will examine your Kubernetes cluster, for example for config maps that may be available to you, and allow you to set those config map properties within your function configuration; volume mounts are similar.
A
Certainly grateful for that. I'm wondering if you can tell us a little bit about what's coming down the pike. We're obviously looking at the latest and greatest stuff here, but we're always heading forward. What do you see on the roadmap for 4.9, 4.10, and what's coming down the pipe?
B
For functions specifically, what we see is, of course, GA around the 4.10 time frame. The one very specific thing that I would like to call out for functions GA is the on-cluster build. Right now we just have the local build available, because we made local development a priority for the tech preview; the on-cluster build will let it plug into your pipelines and all that, and most enterprises are more security-conscious and want to have that control.
B
So that is going to be one of the features. The TypeScript that Lance has shown is available, but I think officially it will be in our 1.7.0 release, I believe, and then there is Rust on the horizon after that. We are also working on the IDE experience, so through the IDE you would be able to create a function, deploy a function, and do all this stuff in an IDE.
B
That is what I can think of right now for Serverless Functions. For Serverless itself, what we are looking into is, again, security; that is paramount, so end-to-end encryption of Knative services and things like that, with Service Mesh or without Service Mesh.
B
We are also looking into cold-start improvements, because, as you know, when all the containers have gone down and have to come back up, there is that delay, so we are looking into Kubernetes and all that to figure it out. For Eventing, right now we offer the Knative Kafka event source, but it offers only a channel at the moment, so we are going to offer a Knative Kafka broker; that's going to be in 4.10.
B
We are hoping for 4.9, but it will probably be 4.10. So those are the big items; other than that, there's the API gateway story and all that stuff we're looking at. In the meantime, we are working with the OpenShift developer console team to have as much as we can be part of the console UI, the cool UI that you just mentioned.
B
Nobody wants to deal with YAML, so our focus is always on doing as much as you can with the CLI and then the UI.
A
Lance, what do you think is the coolest new feature that people should check out in the 4.8 release that we've been yammering about today, that you want feedback on?
C
Well, I didn't show it, but that interactive user experience for sorting through volumes and config maps, I think, is actually really powerful, and we should probably add it to the demo. It's such a new feature that I don't yet have it included in the demo I like to do, so that is probably the biggest thing. I personally am also just excited about the TypeScript support.
B
Another thing, like I mentioned: you don't have to worry about the liveness probe or all those things; they get done for you. You don't need to worry about container creation, you don't need to worry about anything else. So we're really excited to put this in users' hands and collect as much feedback as we can. From my perspective, the whole Serverless Functions part is what I'm really excited about for 4.8.
A
Well, I hope so, because you're the PM for it. So if people want to reach out, what is, and maybe share the screen, the best landing page to get more information or to give you feedback? What resources do they have to get started?
B
So, one thing I also mentioned is that we have been working on an upstream project, Boson, on which Serverless Functions is based, and we have recently donated it to the Knative Sandbox community. We can share the Boson URL; we are still maintaining Boson until the move is completed, and that's a good place to reach us, and it's going to be the fastest, because you can reach the whole team there. Other than that:
B
We have our support channels that anybody can use to reach us, and we have our documentation; even though it is a tech preview, we have provided documentation for it, and I'll share the link in a minute. And there is serverless-interest@redhat.com, an email where you can reach out to us with feedback.
A
So this Boson project, you have officially donated it to the Knative group?
A
And one person, while we're waiting for him to do that, is asking: does OpenShift Serverless exist in CodeReady Containers?
C
Okay, so I want to give a little bit of history first, because I am very excited about this, and the history is part of why I'm so excited about it.
C
About a year ago, we engaged with the Knative community as part of a larger industry-wide effort to determine how Knative might move forward with functions. There was a lot of disconnect during that period, and ultimately that whole effort was abandoned, but we had gotten very excited about the concept and the idea of bringing this technology to our OpenShift developers.
C
So we continued to work on it on our own, and, cut to a year later, we've got this technology that we think is really great. We shared it with some folks in the Knative leadership, and it was well received: we're now going to be part of the Knative Sandbox, and it will be the de facto functions experience for Knative.
C
Very probably this afternoon, this repository will move over to the knative-sandbox organization. Right now it's part of the Boson project organization, but GitHub is really good with redirects, so if you go here, you'll end up in the right spot. This is the code for our CLI, but the Boson project consists of a lot of different technologies, like I said in the slides, so we do have our buildpacks here, as well as the JS runtime, which is the function invocation framework used for TypeScript and Node.js functions.
B
And we are prototyping some Rust
C
Runtimes as well. So that's the Boson project and the history of it: the Boson project started as a result of those efforts within the Knative community about a year ago, and it has now come full circle.
A
Well, that is awesome, and that is the Red Hat way: we love to get our stuff out there in the open and keep it there. So this is great, and thank you for the update, because it wasn't on my radar, so I'm thrilled about that. That'll be another thing we can pull out in the next Knative upstream OpenShift Commons briefing.
B
Wonderful, yeah. We're working with Simon on a dedicated show on this journey itself, from Boson to Knative Sandbox, and I think this is going to be really interesting.
A
Yeah, I think that would be a great talk. So again, maybe I can coerce Naina into sharing her screen: in the pantheon of Red Hat landing pages on our sites, where is the best place for people to go to learn more about Serverless and OpenShift?
B
Well, I am going to share my screen, one second, and I know that things have changed a little bit in how we have our documentation now.
B
So, if you can see my screen, this is the OpenShift documentation page that we have right now: docs.openshift.com. We select the version, because we have tons of versions out there; the latest is 4.7 for now, since, as you know, 4.8 is still in the works. Select it, and once you are under the OCP platform docs, Serverless is part of it; Service Mesh and all of them are here too, so the Serverless documentation is part of the larger OpenShift documentation.
B
So you have just one place to go, and once you go there, it starts with the release notes, the getting started guide, and the Serving and Eventing parts that we talked about; the event sources we have, which built-in and supported event sources we offer. And I'm going to ask Lance, too, because I'm thinking he has the Kamelets installed: he can show us, with the help of Camel K, how many event sources open up for you to connect to serverless applications. And then the Functions part is here.
B
Of course, there is this message that it's a Technology Preview feature, and then it talks about how you can get started with it, what the prerequisites are, and how to build it. Another important thing we do is language-specific documentation: if you are developing a Node.js function, what you can get from the metadata file and the boilerplate code we provide, what can be done, what the return values are, how you return from a handler.
B
So it covers the programming needs you have for the code you're writing. We are in the process of making a new release, Serverless 1.16, which is going to be on 4.8, so the docs are going to be in flux for a couple of days until we get it settled, but docs.openshift.com is the perfect place for you to start looking into the docs. We don't have a direct live feedback channel from the documentation, though.
B
So, Lance, if you have a cluster up, would you like to show the operator installation? If not, then: on the OpenShift console, when you are in the admin perspective, there is Operators and Installed Operators. This operator is available on OperatorHub, so you just type in "openshift serverless" and it shows up, and then you just click install. You have a choice of whether you want automatic or manual updates.
C
Okay, so you should be seeing my console here. I've got the developer view up; to install operators you're going to want the administrator view, and here's the Operators section. You can look at OperatorHub to see what operators are available, and there's Red Hat OpenShift Serverless. You can see that it's already installed here in Installed Operators; I've got the Camel K operator as well.
C
That's where the Telegram event source came in. So yeah, this is the operator. And Naina, I know you asked me to show all of the Kamelet event sources, but I'm on an old cluster; this is a 4.6.
B
So it doesn't show, yeah; that is going to be part of 4.8, where it actually shows up, and it's hard to get a 4.8 cluster right now. But all those event sources, the 300-plus event sources that Camel K provides, show up there. Some of them are supported, some of them are in tech preview, but they are available and you can play around with them.
A
Yeah, we have a couple of past OpenShift Commons briefings on Camel K and Kafka and all of that, so if you're looking for them, just go through the briefings list and search on Camel and Kafka; we recently did a whole series of them a little while ago.
A
So there's lots of good content out there, and I think you guys have done an amazing job showcasing the serverless side of OpenShift and its functions. I'm really appreciative of you taking the time to do this, of all the work that goes into it, and of how passionate you both are about this topic. Now I can see why you're the PM, and why you're the Functions architect; I love the title.
A
So thank you both for joining us here today, and everybody out there watching, thanks very much for taking the time to spend it with us. We will have more 4.8 updates coming, more deep dives like this one on different aspects.
A
So check out the events calendar on commons.openshift.org to find those, and if there's a topic we didn't cover, let me know; reach out to us on our Slack channels, or wherever you reach the Commons folks, and I will endeavor to find someone to talk about it.
A
So thanks again, guys, and have a wonderful day. I'm looking forward to the 4.8 release party; almost there. It's going to be on Zoom, and it's going to be virtual, but I'm still looking forward to it. All right, guys, thank you so much. Thanks!