From YouTube: Developer Experience Office Hours: Serverless Scenarios
Description
Join OpenShift's Developer Experience experts for our regularly scheduled program filled with cloud native, Kubernetes, and OpenShift tips and tricks for developers.
A: Good morning, good afternoon, good evening — welcome to another edition of the Developer Experience Office Hours here on OpenShift.tv. I am Chris Short, executive producer of OpenShift.tv and technical marketing manager here at Red Hat. I'm joined by two of my favorite Red Hatters today from our developer evangelism team: Ryan Jarvinen and the one and only Natale Vinto. How are you all doing today?
B: Hey hey, doing great. Yeah, as usual, we would love to hear your thoughts on topics for the show. If you have anything you're super interested in seeing, definitely mention it in chat. We also have a feedback form, which I may as well paste a link to — I got a new shortened URL for that.
B: Join the chat here, and topics — let's see, topics, welcome — all right. I just posted a link to it: DevExpFeedback, camel-case capitalization, so capital D-e-v, capital E-x-p, capital F for Feedback. Give that link a click and let us know what topics you'd like to see here in the Developer Experience Office Hours. Natale, great to have you here today to help cover today's topic. We're going to go through some serverless scenarios from learn.openshift.com. Feel free to drop questions relative to that topic, or other topics, into chat, and we'll do our best to try to handle them.
D: Sure, that's pretty interesting. Thank you — it's always a pleasure to join you in this Developer Experience Office Hour. Good morning and good afternoon, everyone. It's about 5 p.m. here; I guess the hours change from your side, Ryan — I think it's 8:00 a.m. from your side, so yeah.
A: Yeah, I mean it's 11 here and it still feels early for me sometimes. So yeah, we're here to talk about — and, sorry, hang on, two things. Ryan, I can make you an even fancier shortened link; we can talk after the show. Oh cool.
A: And then I want to point out that Joel Lord is in the house, everybody. So if you've got your front-end framework questions, like, tee those up — now's a good time for that. And then also, shout out to JP Dade, who has been in firmware hell for the past 14 hours. Thank you for joining us; I hope this is a good break for you.
B: That's the main topic today. So I personally have tried to scope my area of focus onto Kubernetes, and a couple years ago I had a lot of folks talking a lot about Istio to me, being like, "Hey, Istio is the hot new thing, you really gotta get up to speed on Istio to see the future of Kubernetes." And then I also heard from other folks...
B: Yeah, like any good project. So I kind of tried to focus down on just what is relevant and useful to me under my kind of traditional developer scope of responsibilities, and tried to take a look at the Kubernetes API and figure out: what is the minimum number of resource types that I need to be informed of? And so Deployments was one where I'm like, okay, I need to deploy a container, I need to know what Deployments are. I need to route traffic to the container...
B: ...so I need to know what Services are, right. And then other higher-order concepts that were optional, or not always included, I was kind of like, "I'm gonna gloss over this until it becomes more of a mainstream, usable-for-everyone type of topic." And so Knative was kind of in that group of things where I was like: unless it is something that's applicable to a major proportion of the audience...
B: ...maybe I can skip it short-term. Well, I have recently come up to speed — well, partially up to speed — on Knative as a field of developer-facing work, and I think there is a lot in there that really extends the experience for developers.
B: So there are new resource types that you're going to need to learn. Also, a lot of these resource types may not be available on a stock cluster, so you may need to install Knative, or an operator, in order to get those abstractions into your cluster.
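A quick way to check whether those abstractions are already present is to ask the API server which Knative API groups it knows about. These are standard kubectl/oc invocations that need a live cluster login; the exact resource versions listed will vary by cluster:

```shell
# List the Knative Serving resource types (ksvc, revisions, routes, ...)
# registered on the cluster; empty output means Serving is not installed.
kubectl api-resources --api-group=serving.knative.dev

# Same check with the OpenShift client:
oc api-resources --api-group=serving.knative.dev
```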
D: Also, just in addition to that: it's another methodological approach, right? So with serverless, we have this function that exists for a limited time. We are in charge of taking control of the input and then taking the value from the output, but all those functions are just executed best-effort.
D: So it's another approach. We can execute multiple functions, multiple applications, in this serverless way in parallel, but we don't expect too much reliability — we don't expect to do real time with this stuff, right? So it's also another approach, let's say on demand, reacting to events, but also kind of best-effort. So it's not the solution for every use case, but it's really, really important in modern workloads — like if you think about IoT or multi-cloud.
D: It's terrifically important to deal with a framework and a pattern like serverless.
B: Yeah — and for folks who are looking to get up to speed on this topic, I have — let me see where I have it — I have a link to a past OpenShift Commons video that I'm going to drop into chat. There was an OpenShift Commons briefing on the Knative project featuring a couple of the upstream contributors — Paul Morie, Roland Huss, Matt Moore, and Scott Nichols — so kind of a cross-company collaboration, a lot of folks, not just Red Hatters.
B: On this Commons briefing — I just watched part of it this morning; really good content there. If you're looking to get up to speed on Knative and learn kind of what's involved, I definitely recommend taking a look at that video.
A: Absolutely. So JP Dade brings up a great kind of conceptual question for us in chat — you know, like, Knative is serverless...
A: What is the OpenShift or Kubernetes comparison to Lambda? Does OCP have a FaaS — function-as-a-service — feature? And that's kind of a good line of understanding, right: like, FaaS is and isn't serverless, all at kind of the same time. Right? Like, Ryan, you wanna — or Natale, you wanna — dive into that a little bit? Or whoever wants to dive into that, feel free.
B: Yeah, if no one else is going to jump in, I'll — I mean...
B: I was just gonna say, this video totally covers this topic, and I would not have been able to elaborate the differences between functions-as-a-service and Knative Serving if it were not for this OpenShift Commons briefing. I mean, I'm sure I could have found this information in plenty of other places, but they do a great job of kind of splitting, you know, what belongs on the functions-as-a-service side of the line and what belongs on the Knative Serving side of the equation. And so, based on that video...
B: ...I'll attempt to summarize for that group. But I think the difference was: Knative Serving provides a lot of the underlying abstractions on the Kubernetes API, but it doesn't have functionality for automatically spinning up a service when it gets external traffic from outside. That's something that might need to be added as part of your platform technology, or you may have to kind of code that up as part of your solution set.
B: Other features that are in that group of not fully supported, or not fully included, in Knative are things like — I guess I kind of said scaling from zero to one, but not just one: scaling up to maybe a hundred replicas, or however many you need based on the external traffic demands, and then also scaling back down to zero and idling the service when it's no longer needed. So that's one example of something that is pretty common in a function-as-a-service or platform-as-a-service style solution — but usually that function-as-a-service is a little bit more higher-level and a little bit closer to a PaaS-style...
B: ...functionality. And Kubernetes is kind of, by scope, trying not to be a full-blown PaaS; it's kind of leaving those details to be implemented by higher-order solutions. So you can get a lot of it via Knative. If you have the Knative or Serverless operators installed, OpenShift can provide a certain amount of that, and we should see some of that in the demo today. Cool.
D: I was just adding to that, because there was the question: what's the difference between Lambda and OpenShift Serverless, or Knative? So that bit of Lambda is the function-as-a-service that Ryan was mentioning, and OpenShift Serverless is going to have also that layer, with a subcommand inside the kn CLI. So you can — you will do `kn faas`, and you can launch a function from a source code base; you can create a container and run it in a serverless way. So the similar piece to AWS Lambda...
D: ...that would be that one. But as Ryan mentioned, Knative is kind of a backbone of serverless, while function-as-a-service is something on top. But we are going also to put this something on top, and Knative is agnostic, right — it can talk with many functions-as-a-service as a plugin. We're going to add this bit with `kn faas` to have this function-as-a-service inside Knative.
B: Cool. Another detail that has architecturally been kind of moved out of bounds for Knative is the ability to build your source code into a container image. Way early on in the Knative story, that was kind of in scope, or something you could do using Knative.
B: Now a lot of that functionality has migrated over to Tekton, and so you can run your build in Tekton and then have Tekton hand off to Knative to serve up the resulting container image.
B: So we're trying to use the appropriate upstream abstractions, with a kind of community of maintainers around them, and not duplicate functionality that exists in other parts of the cloud-native ecosystem.
B: Another interesting piece that is really an attempt to align with upstream needs: earlier on in Knative, in order to do kind of traffic splitting between multiple Knative services, you would have to install Istio as a requirement.
B: A lot of that is now kind of behind a compatibility layer, where you can plug into Istio and use Istio for traffic splitting, but there are also several other kind of traffic-shaping providers. They're working on support for Ingress v2 — I think that's alpha or beta currently, but that's in progress. But there's a variety of traffic-splitting implementations, and Istio is no longer a strict requirement. You could use it, but it's not a requirement of Knative.
D: Right, and the name of the new ingress for doing this is Kourier. This is the new tool they use instead of Istio. So Istio is not a requirement, and you can use Kourier out of the box to have this functionality for getting the traffic inside your serverless application.
B: Okay, so hopefully you all can see my desktop. I have opened up — I'm going to post a link into chat in case anyone else is interested in...
B: ...joining. So we're at learn.openshift.com/developing-on-openshift/serverless — we've got hyphens between "developing on openshift" — but you should be able to find it from learn.openshift.com, yeah. We have a whole serverless kind of folder there with a lot of different topics in it. There's an introductory serverless one, which is what I'll cover today, and then there are a couple more sections covering Camel K, which gives you some nice extensions to the eventing solutions in serverless.
B: So, let's see — this says estimated time: 30 minutes. We'll see how long it takes us, and if you notice any issues with the content as we're going through it, let us know. We're constantly in the process of updating this content and making sure that it is relevant, recent, accurate. This is currently using OpenShift 4.4 for this scenario.
B: So it's not our absolute latest, but it should still cover a lot of the concepts. So I'm going to hit Start Scenario — oh goodness, capacity limit. Let me reload and see.
B: I got a capacity-limit warning yesterday as well, on some other scenarios, but then this one magically worked for me — so I guess I lucked out yesterday. Looks like I've got a session available. This does take a couple minutes to start up; the work that it's doing in the background is installing the Serverless Operator.
B: So this is kind of the steps you would need to do as an admin in order to make the OpenShift Serverless Operator available to developers.
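As a sketch of what that admin step amounts to: the operator install is an OLM Subscription object. The channel and catalog-source names below are typical for this era of OpenShift but may differ on your cluster, so treat them as assumptions:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable                       # update channel; varies by release
  name: serverless-operator             # package name in the catalog
  source: redhat-operators              # catalog source (assumption)
  sourceNamespace: openshift-marketplace
```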
B: Oh, this looks like it's just one namespace for this example. But — oh no, let's see — all, all...
B: Yeah, yeah, that does make sense, yep. So another detail I picked up from that OpenShift Commons summary is: if you are using a Kubernetes cluster that uses namespaces as part of its role-based access control, and part of its multi-tenancy, then Knative works pretty decently.
B: Well, if you're using that type of analogy to do multi-user support — so you can use namespaces as buckets for your Knative stuff, and then associate people with the RBAC rules, and hopefully that splits up access control in a reasonable way.
B: The next step is to log in as a developer and create a new project. So I should be able to — here we go, tutorial ready. Let's try `oc whoami` — currently logged in as developer, cool — and `oc project`.
B: Yeah, so what we're going to do in this section is deploy our first Knative service — a Knative service, not a Kubernetes Service, so we're kind of reusing the word "service", but hopefully we've set the context appropriately. Configurations, revisions, and routes will be set up, and — oh, it should automatically scale to zero when we're no longer contacting the service. Nice.
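The deploy itself is a single kn command. The image below is the greeter image this scenario mentions later (knative-tutorial-greeter, quarkus tag, from the Knative tutorial's repo on quay.io); the namespace matches the one named later in the session, so treat both as belonging to this tutorial rather than required values:

```shell
# Create a Knative service from a prebuilt container image
kn service create greeter \
  --namespace serverless-tutorial \
  --image quay.io/rhdevelopers/knative-tutorial-greeter:quarkus

# kn prints the auto-generated URL once the service is ready
kn service list

# Hitting the URL cold-starts a pod and returns the greeting
curl "http://greeter-serverless-tutorial.<cluster-domain>/"
```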
B: The OpenShift dashboard provides a lot of nice visualization for this as well. We could take a quick look at the schema for — actually, this looks like a plain Service, but the apiVersion is in serving.knative.dev. I was kind of expecting kind `Service` wouldn't be the official Service resource type, but I think this apiVersion distinguishes it.
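In other words, the manifest reads almost like a core Service but lives in the serving.knative.dev API group. A minimal sketch (same tutorial image as above, assumed for illustration):

```yaml
apiVersion: serving.knative.dev/v1   # this group is what makes it a Knative Service
kind: Service
metadata:
  name: greeter
spec:
  template:                          # revision template; each change creates a new revision
    spec:
      containers:
        - image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus
```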
B: Cool. So kn is the command-line tool that you can use for interacting with all the Knative resources. You can also use kubectl, but kn gives you a lot more specific functionality for Knative, right?
D: Usually, in our presentations, we put what the difference is between writing a [Knative] service and writing a Deployment and other Kubernetes stuff. So usually we present it also as a way to make a shorter YAML file that defines multiple things — of course in the serverless way — but it's also a way to have a kind of shorter infrastructure-as-code for your services.
D: The service itself is, as Ryan mentioned, a specific API definition for Knative, but under the hood it then implements the Deployment, the Service, the horizontal pod autoscaler. So it's also a way to write less YAML, if you want to follow this serverless path. And I found also one link to this — so maybe it's more clear — let me check; I can share it in the chat. Here we go.
D: Yes, I'm just sharing in the chat a presentation we made with Ryan at the Cloud Native Italy event. We present, in a general way, what Knative is, and in this specific slide I linked there's the reference between a service and what the service does under the hood: it creates the Deployment, the autoscaler, the Service. So from 70 lines you come down to 22 lines, which is cool, you know, if you want to follow this serverless path.
B: Nice. So it looks like we should have our initial serverless solution deployed and available. I ran an example curl command against the resulting auto-generated URL, and we got a response back — and I'm guessing that this first number is probably a hash representing that initial service revision.
B: It looks the same to me, yeah. Well — we'll see whether my auto-shutdown... I'm —
A: Still the same, like, UID string — so I guess it's just unique to the app or whatever.
B: Yeah, so this is — hey, nice. Yeah, I'm used to doing `oc route list` and then having to do a little bit of explaining to upstream users about why I'm using oc instead of kubectl — but with kn, I have less explaining to do. So `kn route list` gives you your routes in a very upstream-compliant way. We're still using kind of OpenShift terminology a bit, calling it a route, so hopefully we're clear what type of route this is and how we ended up with it.
D: The content is partially taken from the great William Markito's decks — yeah, adapted for 4.x. Always great stuff from William.
B: So it looks like our previous image was tagged knative-tutorial-greeter, quarkus was the image tag, and we are doing a roll-forward, or update, to go to the latest.
B: I was kind of expecting that number to change, but maybe it's an ID for the route, or something else, rather than the revision or the generation. Maybe.
B: Okay, so we ran the curl, and it looks like we've got some traffic there.
B: The greeter service will automatically scale down to zero if it doesn't get a request for 90 seconds. So if you are running a curl, or some type of refresh in your browser — if everyone holds off for 90 seconds, I should be able to rerun this `oc get pods` and we should see it automatically scale back down. Cool.
B: See, currently at two of two, and it looks like one's currently terminating — so we're already downshifting. Yeah, nice.
A: So, a question for after the demo, but I'll ask it now since we're waiting: how easy is it to modernize existing apps to serverless? Right — like, what are the constraints that people usually stumble over? Is a good question.
D: It's a great question, Chris. I think it's also hard to answer, yeah.
D: Maybe, as we said before: don't expect that your application is gonna be real-time. If you have any real-time application, that's not gonna work, because of the scheduler, the internal server, let's say. So serverless in general, as a paradigm: your function, your application, runs for a limited time — let's say by default five minutes, or [scales down after] 90 seconds if you don't use it, right? So don't expect your application to be critical or real-time.
D: It's just one shot. And if you keep this in mind, and your use case can work out with serverless, then, to be honest, adapting your application to serverless in OpenShift is dramatically easy. It's just flagging in the developer console, "Hey, my application is serverless," and it becomes serverless.
D: So under the hood, the dev console writes the Service CR — the serverless Service API, the Knative Service. So it's very easy to write, or deploy, an application serverless; what is difficult is to understand whether your application is a good fit for serverless, and we understood that it is if your application can work asynchronously, independently.
A: The biggest thing is, like, think of it as — you know, can your application work as "if this, then that," like the ifttt.com thing? Like, if an event happens, do a thing; the event happens again, do the thing; another event happens, there's this different scenario now. Right? Like, you have to kind of break your app down to the point where it can just say: okay, I'm only going to run, and it's only going to take me a limited amount of time to do this one run, and off it goes. Think of it as, like, the 12-factor app, right? Like, yes, there can be state involved, but it's very much in that kind of execute-and-then-continue kind of scenario.
B: I think part of it depends on how much YAML you're currently dependent on, and how your app is architected. Yeah — if you've already kind of architected it in a 12-factor style, you might just be able to run a Tekton build and then deploy the resulting image as a serverless resource, and hopefully it just works.
B: If you are very invested in Helm charts, or other advanced YAMLs, you might not be able to stuff all of those YAMLs inside Knative; you might need to re-adapt some of your YAMLs to be Knative. You can use the kn command line to help generate those initial YAMLs, and then you can store those YAMLs in a Helm chart — but Helm doesn't have quite the same support for traffic shaping.
B: So this section should be just about done. I tried logging into the dashboard using the developer credentials and had trouble finding the namespace, so I'm going to need to test that again. I thought last time I tried it, it worked correctly. So I'm not sure if this is just a bug in Katacoda or something else, but we'll step through the next couple steps and see how far we get.
B: I was able to log in as an admin, but then didn't see a lot of the resources I was expecting to see. So, let's see — this next section should cover traffic distribution and blue-green deployment. Nice.
B: I'm curious how many folks in chat are really involved from a development perspective, and if so, are they managing their own blue-green deployments, or are they just handing off to a build-and-CI suite and then that's the end of the road for them? I know in my past experience I was generally kind of a developer and an SRE at the same time...
B: So that's what I'm used to. I'm curious if any folks in chat just kind of hand off to a pipeline and that's the end of the story. If so, you know, this blue-green may be out of scope for you — but this is really useful from my perspective.
B: Most of the traffic splitting that I've done in the past is 100% covered by what's available in Knative. So now that I've seen all the traffic splitting that's available here, I'm kind of curious: do I really need Istio? It hasn't been proven to me, as a developer, that I need all of that traffic-splitting support — but I wasn't doing mixing, and I wasn't making full use of all the features that Istio tries to provide.
B: So here we can run an update: we're going to set the revision name to greeter-v2 and set this environment key here — cool. The revision list shows we've got a v1 and a v2 available. We can run another update; it looks like we are setting greeter-v1 to "current" and greeter-v2 to "previous", and then setting a "latest" tag as well, and a hundred percent of the traffic in this case is going to go to greeter-v1.
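The commands behind that sequence look roughly like the following. The MESSAGE_PREFIX variable name is an assumption borrowed from the greeter tutorial, and exact flag spellings can vary slightly between kn releases:

```shell
# Roll out a second revision with an explicit name and a new env var
kn service update greeter \
  --revision-name greeter-v2 \
  --env MESSAGE_PREFIX=Namaste

# Confirm both revisions exist
kn revision list

# Tag the revisions and pin 100% of traffic to v1
kn service update greeter \
  --tag greeter-v1=current \
  --tag greeter-v2=previous \
  --tag @latest=latest \
  --traffic current=100
```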
A: So what's cool is I just checked the headers on that request, right? It shows up as an Envoy upstream service type, and there is a cookie involved if you want it. And yeah, like, it looks like it's coming from just any old web server, but it's got a couple extra headers for Envoy.
B: And it just says "hi greeter". I thought we were setting an environment variable, so I guess we're on v1 currently. I didn't check the response earlier when we were on v2, but you should be able to roll between v1 and v2, change the traffic allocation, and off you go. So canary releases — this was one of the key capabilities that I relied on as a combo developer and SRE.
B: I would always deploy my code — I used to use cookies to basically do the traffic splitting: if you were cookied one way, you'd go down one path, and if you were cookied a different way, you'd go down a different path, and I could set cookies on all the incoming traffic.
B: I would have a special cookie ID that I could set just for myself, using my JavaScript console or other things like that, and then that would allow me to go down and access a solution that was published but not given any percentage of the traffic.
B: So that was how I would do kind of canaries in the past. And then I could ping my service, verify that it was running correctly, maybe even run some test automation against it in production to ensure that it was fully functional, and then I'd dial up the traffic once I was somewhat confident that it was working as I expected in production. So this will show you how to step through all of those hoops here.
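With the revisions in place, dialing a canary up is just re-running the update with new percentages. The split values here are illustrative, not from the scenario:

```shell
# Send 10% of traffic to the new revision, keep 90% on the old one
kn service update greeter --traffic greeter-v1=90 --traffic greeter-v2=10

# Once the canary checks out, shift everything over
kn service update greeter --traffic greeter-v2=100

# Inspect the current split at any point
kn service describe greeter
```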
D: What's interesting is that, together with this, in OpenShift we have three ways to do A/B testing, traffic splitting, canary: we have the general routing system, then we have Istio service mesh, and we have also serverless. And the serverless one is maybe smarter, because your application is not active until you call it — so it's not running, and you are saving resources.
D: You are saving your application from consuming resources until you invoke it through a route, through a URL. So maybe this is the smartest way to deal with revisions and canary.
B: Yeah, I think it's pretty cool that you can basically always have every version live in the system, to some extent, and then update your traffic-splitting rules to either expose or hide those services, or have them shared behind a unique URL. So yeah, this gives me pretty much all I was hoping to get from Istio, without having to take on the added complexity of learning all the Istio CRDs.
B: So I need to do more research into Istio now, to see how it compares and what's the potential value for me as a developer — but this has pretty much all I need as far as traffic routing for my basic use cases, at least. Nice. That's awesome.
B: We've got something running here, but I hit some kind of issue with my shell earlier, so I'm not sure how many of these commands actually pasted. Let me see if I can find the example for showing that — let's do `kn service`...
B: Let's try that. Cool — okay, great, so this is what I was hoping to see. We've got a v1 and a v2; currently we've got a hundred percent to "current", and there are these other tags, "latest" and "previous", that you can manipulate.
B: So, more examples here — we're running almost to the end, and I have one more section to go through here. This is the scaling section. This is going to talk about, or just demonstrate: scaling to zero and why that's important; understanding the grace period for scaling to zero, which is customizable; setting autoscaling strategies; concurrency-based autoscaling; a minimum number of replicas; and setting up a horizontal pod autoscaler, so you can do more advanced scaling.
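Those knobs map onto autoscaling annotations on the Knative revision template. The annotation keys below are the upstream autoscaling.knative.dev ones; the numbers are made up for illustration, and the image line is a placeholder for the scenario's prime-generator image. kn also exposes most of these as flags (e.g. --min-scale, --max-scale, --concurrency-target, depending on version):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: prime-generator
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"   # never idle below one replica
        autoscaling.knative.dev/maxScale: "10"  # cap the scale-out
        autoscaling.knative.dev/target: "50"    # concurrent requests per replica
    spec:
      containers:
        - image: <prime-generator-image>
```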
B: Lots of nice options in this section, and the OpenShift dashboard is going to show a lot of really cool visualizations to go along with this as well — assuming it loads up everything for you. I'm going to give that dashboard one more shot and see if I see any of this content. I'm still getting the catalog, and it's saying "no workloads found". So I have an odd situation going on — maybe I'm in the wrong namespace; I think serverless-tutorial was the right one.
D: Yeah, absolutely. For instance, if you have a Kafka cluster, you can connect the Kafka messages to your application, just drawing a line from the Kafka event point to your application. Under the hood, there is going to be a Knative Eventing API — like Serving, but for events — and it's going to be done automatically by the OpenShift web console. So the user experience has improved a lot, and it's much, much easier to prepare your serverless workloads in terms of Serving and Eventing.
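What the console generates under the hood is a Knative Eventing source object; for Kafka it is a KafkaSource that pipes a topic into a Knative Service as its sink. Bootstrap address and names below are placeholders:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-greeter-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092  # placeholder broker address
  topics:
    - my-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter        # events scale this service up from zero on arrival
```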
B: There's also a wide variety of eventing kind of sources and sinks available via Camel K. I have not learned very much about Camel K as of yet, but I know we do have several learning scenarios focused on Camel K and the additional eventing types that are made available through that solution.
B: So definitely, if you're interested in learning more advanced eventing, and integration around eventing, take a look at those follow-on scenarios that come up after this one. Is anyone else currently going through this scenario — any folks in chat? Let me know if you saw something dramatically different from what I saw when you opened up your OpenShift web console.
B: So I have, so far, created a new service, set up some max scale, and a couple other modifications.
B: This last example, I think, generates prime numbers, and you can input, I think, kind of a seed value, and it'll tell you what the next prime is following that. Interesting.
B: Yeah — let's see. So I have reported a bug about this. Do you see, in the console, I'm getting an error message back after running this hey command? And in this you can see it says — in an XML response — "anonymous caller does not have storage.objects" [permissions].
B: I believe this is actually a bug related to Katacoda. So if you're running this on your own local cluster, you can copy and paste these commands into a local CodeReady Workspaces instance, or minikube, or other Kubernetes offerings, rather than just having them paste across into the embedded shell. So hopefully, if you copy and paste this into your own cluster, you won't see this error message from the hey command. You will need to install the hey command, I guess, if you don't have it already.
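hey is a small open-source HTTP load generator; a typical invocation against the scenario's service looks like this (the URL is a placeholder), and it's the burst of concurrent requests that makes the autoscaler fan replicas out:

```shell
# 50 concurrent workers, sustained for 10 seconds, against the serverless endpoint;
# hey prints a latency histogram and status-code summary when it finishes.
hey -c 50 -z 10s \
  "http://prime-generator-serverless-tutorial.<cluster-domain>/"
```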
B: But this is actually an error with Katacoda, and I've already reported a bug — so no need to report it. Hopefully...
B: Yeah, so that should get you through to the end of the serverless session here. There are several more examples: you can click on the "more scenarios" button at the end, and that should — let's see, I'm going to go back to the main Learn page and see if I can find it — here we go, OpenShift Serverless. So there's a whole section here: there's Getting Started, and then these other five all kind of dive into — or at least four of them dive into — Camel K use...
D: ...cases. Apache Camel is a popular open-source project, and Camel K is the serverless version of this project. It's pretty new, but it's got good momentum, because it helps you define what is called a Camel route. So those are routes which are virtual — kind of like Apache connectors — but made with Camel, with a DSL language that abstracts connecting multiple endpoints. A source can be a database, a queue, and you connect this to another cluster, to another Kafka partition.
D: So it's an abstraction around this eventing part, and it's really cool. I suggest trying it out through the scenarios — it's something that is having a great moment. Yeah.
A
Yeah,
no,
to
be
sure,
we've
talked
about
camel
okay,
a
couple
times
on
the
channel.
B: Well, I posted one last link to our topic survey. If you have topics that you would like to see coming up — one of the suggestions that we had in there was how to debug and view logs with serverless apps, so we might hit that up in the future, or you can ask the hosts on the next session there.
B: Yeah, thanks again for your feedback in chat. Let us know if there are other topics that we should hit on this show in the future — and I think that's it...
A: ...for us, yeah. Awesome — great work today, Natale and Ryan, thank you so much for joining us here. Like I mentioned, stay tuned for the next show that's coming up here in just a few minutes; we'll be talking about serverless functions with the serverless functions experts, apparently. Yeah, Lance Ball is a serverless genius, to say the least. But yes, thank you all for joining, and I will see you all very soon.