Description
OpenShift Commons Briefing
What's New in OpenShift 4.4 for Developers
Jan Kleinert, Brian Tannous, Joel Lord (Red Hat)
2020-05-28
hosted by Diane Mueller (Red Hat)
B: So, as Diane mentioned, you've got a few members of the OpenShift Developer Advocates team here. Brian, Joel, and I will be demonstrating some of the different features that were added in OpenShift 4.4 that are primarily focused at developers. I believe Jay Dobies is also going to be joining us, and he may be participating in some of the commentary and questions as well. So I will just go ahead and get started.
B: I'm going to be covering some of the updates to the developer perspective and the developer catalog. I'll pop out of full-screen mode here and hop right over into the developer perspective in the web console. If you're not familiar with the web console in OpenShift 4, there are these two perspectives. By default you will often land in the administrator perspective, but you can toggle over to the developer perspective here.
B: This has been around since, I believe, 4.2, but a lot of features have been added to make application deployment even easier in 4.4. These include developer catalog updates that allow developers to filter and group items in the catalog, and labels to visually distinguish item types; I'll show you all that in a moment. We also have operator-backed services in the developer catalog now, which allows developers to run a variety of workloads that are installed and managed by Kubernetes Operators. We'll also look at Helm 3 a little bit as well.
B: So let's go ahead and get started. If we look at the catalog here, this will open up the developer catalog, and you can see that, in addition to the filter options that were here to begin with, we can now filter by type. Right now I have all of the items available, but let's say I just wanted to look at builder images and operator-backed services. You can toggle these checkboxes on and off to narrow down the list of items that are available.
B: Now we're looking at the operator-backed services, of which we happen to have nine installed on this cluster. You can see that there's also this Group By drop-down menu. If you choose Group By Operator, what this will do is visually clump together the items in the catalog that are related to different Operators, and this can make it a little bit easier to find what you're looking for and find the items that are related to particular Operators.
B: Now I'm going to install this one here, just to show you what that process looks like. Also, you may have to bear with us: there are some issues going on with Quay at the moment, which may cause us some problems deploying certain things, depending on whether they need to pull images or not. So when you are installing these operator-backed services, you have the opportunity to manually edit the YAML here.
B: You can if you want to; I'm just going to click Create, and that's going to take me to the topology view. In topology view you can see that these are visually distinguished as being operator-backed services: the little O here stands for operator-backed, and it has this dotted rectangle outline around it. In this particular case there's only one item in this block, but some Operators may have multiple components; they would all be there in that rectangle, so you can see what is all grouped together.
B: Next, Helm charts were added to the developer catalog in this version of OpenShift. Right now, the Helm charts that are visible come from a specific repository; in future releases of OpenShift you'll be able to specify which repo of Helm charts you want loaded in the system. But I'll go through the process here of showing you what it looks like to install a Helm chart from the developer catalog. This is a Node.js example one.
B: When I click in there, you can give it a release name. This is also from the values.yaml file, if you want to make any changes here. I'll click Install, and then, similar to what we saw with the operator-backed services, Helm releases are distinguished with their own indicator as well. You can click on here if you want to follow the builds or any of those steps as it's getting deployed. You'll notice over here there's also a link for Helm in the left navigation.
B: That is also new, and it will allow you to see all of your Helm releases here. Now, you've been able to use the Helm 3 CLI with OpenShift for a while, and if you need the CLI and you don't have it, you can get it here under Command Line Tools; you can go here to download the Helm 3 CLI.
B: Stable — if I typed that right, that will get it installing for us. Okay, it's good! So now I can run helm list, and here you can see, hopefully, both the example MySQL one that we just installed from the command line, as well as the Node.js one that we did from the web console. Switching back over here, you can see that coming up as well, and it's also here: you can also see the revisions and any of that information, and you can click into it to see more. All right, so let's deploy.
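The command-line flow here, with Helm 3, looks roughly like the following sketch. The release and chart names are assumptions based on the demo, and the `stable` repository URL was the one in use in 2020 (it has since been deprecated):

```shell
# Add the chart repository the demo pulls from (assumption: old "stable" repo)
helm repo add stable https://kubernetes-charts.storage.googleapis.com

# Install a MySQL chart as a named release in the current project
helm install example-mysql stable/mysql

# List releases; this shows both CLI- and console-installed releases
helm list
```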
B: I wasn't paying attention, so I added that into this application grouping here for Jaeger. That's not what I actually want, so we can edit the application grouping: instead of having it grouped with Jaeger, I'm going to create a new application grouping; we'll call it node-example.
B: That's better. Now it's sitting here in its own application grouping, which is helpful. You can also, I believe, shift-click to drag this around if you want to move it that way too. What I wanted to show you next was these connectors. If I hover over this item here, you'll see the little dotted-line connector show up. If I click on that, I can use it to make visual connectors between different items, and that is exactly what it says: it's just a visual indicator that there's some connection between two items.
B: In certain cases you can create a service binding using these connectors. But here, let's say, for example, these two components communicate: I can drag that, and then anyone looking at topology view will be able to see that there's some association between those two. If you do that by mistake, you can delete it pretty easily. The last thing that I want to show you here with the developer perspective is adding items to projects or applications in context.
B: This time we'll create our new application right from here and click Create, and then that's created here. That can be a time-saver if you are trying to add something into a project or application; you can do that straight from topology view. I think that was most of what I wanted to show you. In summary, these are some new features in the developer perspective and developer catalog to make browsing and finding items easier, and to make managing and adding to your deployments easier from the topology view.
C: So Tekton is, as it says, Kubernetes-native CI/CD. As you can read, it's a powerful and flexible open source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems. What's really nice about Tekton is that you have all those little basic building blocks, and you can build your big pipelines for CI/CD, and everything runs inside of your Kubernetes cluster. Just to make sure we're all on the same page, this is what a Tekton pipeline looks like.
C: Basically, you have your pipeline, which will contain a bunch of different tasks. Tasks can be run either one after the other or in parallel, depending on what your needs are at the moment, and each task has one or more steps. Once you've created those pipelines, you can actually attach resources to them, so you could have some input resources and outputs. As an example, you would have a Git repository as an input.
C: Once you have all of that, you will be able to trigger a pipeline run, which basically is the execution of a pipeline. So this is what we're going to take a look at. To build all of your different tasks, there is a catalog available in the tektoncd/catalog GitHub repository, and you can see that there's a bunch of different tasks that you can start from. Say you want a task that will perform something with the OpenShift client.
C: You can just find the YAML file, which is right here, and you can import it into your OpenShift cluster. You can also use the version v1alpha1 or v1beta1; v1beta1 was released just a few days ago, but a bunch of pipelines still use alpha. I think this is due to change sometime soon. Let's take a look at our pipelines and what they look like.
C: If I want to take a look at my cluster, I've got a brand new project here, and I can take a look to see if I have any tasks; currently I don't have any. So, instead of importing one directly from the catalog, I'll actually go ahead and create a task manually. I will give this task a name — we'll call it the hello task, and you might guess what it will do: it will do an echo, and we'll use hello.
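A task like the one being built here might look roughly as follows in YAML; the exact image and field values are assumptions, not what was typed in the demo:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  params:
    - name: name
      type: string
      default: world              # default used when no value is passed
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal   # assumed image
      script: |
        echo "hello $(params.name)"
```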
C: It echoes "hello" plus `$(params.name)`, so it will print whatever we pass it as a parameter. We'll probably want to add a default value as well, so we can just add that — and this is a Tekton task, so we have it right here. I can now create that, and now that I have a task, I can actually go ahead and create a pipeline with it. If I go into Pipelines, which is part of your navigation bar now, I can create a new pipeline; we'll give it a name, we'll call it...
C: ...the hello pipeline. And from here I can select the first task that I want to run. You'll notice that I have a bunch of pre-populated tasks that were defined by my cluster admin. I could use one of them, or the one that I've just created, so I'll use the hello task for now. If I click on it, I can see all the different details, and you can see that the name parameter was already prefilled with "world", because that's my default value. I'll just go ahead and create this pipeline.
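The resulting pipeline definition is roughly this sketch (names assumed from the demo):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  tasks:
    - name: say-hello
      taskRef:
        name: hello-task
      params:
        - name: name
          value: world            # prefilled from the task's default
```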
C: So this is a pipeline: one big pipeline has multiple tasks — well, only one in this case — and each one of those tasks could have multiple steps, but this one only has one. If I go to my pipeline, I can actually go here and start it, and you can see the task is now running. If I click on it, I can see the logs. I'll have to be really quick, because that shouldn't take too much time — and here we have it, so we can see that it was successfully completed.
C: This was changed to a check mark, and we see the output "hello world". If I go back to my pipelines, I can select the pipeline, go ahead and edit it, and you can see that I can change my parameter to say hello Joel instead. As I said, all those tasks are reusable; you can very easily change them by using different parameters. So if I go ahead and trigger this pipeline now, as you might guess, the output will have changed a little bit.
C: If we go back to our pipeline once again, there are a few things that we can add, so let me go ahead and change it just a little bit. I forgot to mention, but you can easily add more tasks if you need to, by just adding them and appending them to the pipeline. So you can really decide to start by running your hello task and then do some Maven build.
C: You could do some other output and then use the build, for example. Now, typically you would normally use parameters as part of the pipeline and not as part of your tasks. So I can go here and add a pipeline parameter — I'll call it name again, and it'll be the person to greet, and we'll still keep "world" as the default value here. I can save this parameter, and now I'll go into my YAML and I'll change...
C: ...my task to use, not the value that was hard-coded in here, but `$(params.name)` instead. What this will do is that my task will now use this pipeline parameter as its value for my full pipeline. So now, if I start my pipeline again, you can see that I am greeted by this nice little modal. It says "person to greet"; I can leave it at "world" for now, and we'll just run this task.
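Moving the parameter up to the pipeline level, as described, might look like this sketch — the task now receives `$(params.name)` instead of a hard-coded value:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  params:
    - name: name
      type: string
      description: person to greet
      default: world
  tasks:
    - name: say-hello
      taskRef:
        name: hello-task
      params:
        - name: name
          value: $(params.name)   # pipeline parameter passed down to the task
```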
C: This can be very useful if you have, say, a GitHub repository with multiple branches. Each time you want to start that pipeline, you might want to use a different branch, or some sort of option that changes each time you run the pipeline. You can easily change those when you have them set as parameters.
C: Resources can be used in kind of the same way. Let's go ahead and create a little bit more complex pipeline; we'll look at something a little bit more complex. What I want to do is something similar to this pipeline here: I want to start with a Git repository, I'll use a task to create an image out of it, and then I will output an image that I can then deploy from my internal registry.
C: I can go back to my pipelines and I'll create a new one — we'll call it our deploy pipeline — and now we'll use our s2i-nodejs task; this is a Node.js application that I'm going to use. You'll notice that, in my pipeline builder, I have this little exclamation mark telling me that some things are not ready: some required fields haven't been filled, and that's because I don't have any resources available right now, so I'll need to start by adding resources.
C: Then I can go back here and specify the Git repository that I'm going to use, as well as the image name. You could have multiple tasks that need multiple Git repositories; in this case it happens that I only have one task, and it's only using that repo. I'll just need to change the defaults as well — we're going to leave it at Node 8, it doesn't really matter for this demo — and once again we have our pipeline, so everything is ready.
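At the time of this demo, pipeline inputs and outputs like these were modeled as (alpha) PipelineResources. A rough sketch, where the repository URL and image reference are assumptions:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-source
spec:
  type: git
  params:
    - name: url
      value: https://github.com/sclorg/nodejs-ex   # assumed sample repo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-image
spec:
  type: image
  params:
    - name: url
      # assumed: internal registry, current project, image stream "demo-app"
      value: image-registry.openshift-image-registry.svc:5000/my-project/demo-app
```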
C: Now, as I mentioned, we've been having a few issues with one of our servers right now, so let's see if this will actually work. I can go ahead and start my deploy pipeline, and you'll notice that now I'm being asked to fill in those fields, because I never specified the resources. So I can actually create my resources on the fly: I'll tell it to use github.com, and I will use the Software Collections Node.js application, also known as sclorg/nodejs-ex. Perfect.
C: Let's just cross our fingers and hope that this might work. It seems like it's taking a little bit too long. What would happen is that it would actually create an image now, and I would be able to create my application. I'd specify that I will take an image from my image stream, which I've created earlier — the demo app — and once it's actually been deployed, I would have a latest tag here that I would be able to use to create my application. I would then give it a name.
C: This one is still running — oh, something's happening, look at that. Let's actually wait for it. As you can see, we have all the different steps that are happening right now. We're creating the image: it pulled the Git source files, it then generated the Dockerfile, and it's now building the actual image.
C: And there it is — perfect. It actually completed successfully, and I can go ahead and create my application. We'll use an image stream; as you can see, I now have the latest tag that I can use. Let's keep all the defaults, and we'll create a route for this, and we have our application; in just a few seconds we'll see it being deployed. But I never actually created that image — everything was taken care of by my pipeline — and I can now access our demo application here.
C: If I were to start this pipeline again — the deploy pipeline — we could start the last run, so it will use the same defaults once again: the same GitHub repository, as well as the same image name, so it will be published in the same image stream. Then you would actually see that application being redeployed as soon as this task is completed. Now, this was also a relatively simple pipeline; with the pipeline builder, there are so many things that you can do.
C: Why don't I go ahead and use this deploy pipeline, and we'll just tweak it a little bit. In most cases, you probably won't want to just automatically deploy your application. As a developer, you probably have some sort of process: you probably have some unit tests that you want to run, and some security auditing that you want to do. So I'll just go ahead and create a few tasks that I already have in here; I'll just need to copy and paste them.
C: I'm back, so I have two tasks. This is a Node.js application, so I'll use npm: I'll run an npm audit just to make sure that we don't have any security vulnerabilities in there. Let's just create this task. And my second one will run npm run test, which will run all of my unit tests for my project. Now I can go back to my pipeline — let's just see where it is, to see if we can actually see the application being deployed.
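The two helper tasks just created could look roughly like this sketch (the npm-test one would be identical except it runs `npm run test`); the Node.js image is an assumption:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: npm-audit
spec:
  resources:
    inputs:
      - name: source
        type: git
  steps:
    - name: audit
      image: registry.access.redhat.com/ubi8/nodejs-12   # assumed image
      workingDir: /workspace/source
      script: |
        npm install
        npm audit        # non-zero exit (vulnerabilities found) fails the task
```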
C: Remember, I just triggered that pipeline as soon as the first one was completed, as soon as this one was deployed, so you should see it in just the next few seconds — it'll have redeployed a new version. There it is: it was pushed to the internal registry, so we can automatically redeploy our applications. That was actually very fast, and it's the same version as before — same source code, I didn't change anything in the meantime — but you saw that it really deployed the application automatically.
C: So, as I was saying, you probably have some sort of processes in place, so we can go back and add our npm test to make sure that all of our tests pass. And we can also run tasks in parallel, because we don't necessarily rely on the npm test to run the npm audit; we can run the testing and the auditing at the same time, just to save some time on the build.
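In Tekton, that ordering is expressed with `runAfter`: tasks with no dependency between them run in parallel, and a later task can wait on both. A sketch, with the build task name assumed:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: deploy-pipeline
spec:
  tasks:
    - name: npm-test
      taskRef:
        name: npm-test
    - name: npm-audit          # no runAfter: runs in parallel with npm-test
      taskRef:
        name: npm-audit
    - name: build-and-deploy
      taskRef:
        name: s2i-nodejs       # assumed name of the build task from the demo
      runAfter:                # starts only after both checks succeed
        - npm-test
        - npm-audit
```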
C: If I look at my pipeline once again, you'll notice that the status here, instead of being all green, shows one failed task, and the other one was cancelled: because a task failed, it cancelled the next tasks in line. That's why this one was never triggered. But at least now we know we have some vulnerabilities in our code — and this was never released, so we can make those changes before we deploy a new application. And that's pretty much it.
C: There it is. The last thing I really liked — and this was just released a few days ago — is that if you go to your VS Code (if you're using VS Code, obviously) and you go to Extensions, you can search for the Tekton extension. There's a Tekton Pipelines extension by Red Hat that you can install, and once you have it installed...
C: Let me just bring this up a little bit. I can actually see all the different pipelines that I have installed on my cluster, and I can see all the different tasks. We can see that hello task that I've shown you, and we can see the details of the last run, which was 13 minutes ago; the npm test was run 2 minutes ago. I can actually see all the YAML related to that, and the same goes for all of my tasks.
C: I can see all of the details of my tasks — that hello world task that I had earlier, this is what it is. We also have all of our pipelines and so on, so everything that has to do with Tekton is available right there, inside of our cluster. But what's really neat about this is that if I actually open up a pipeline, I now have access to the pipeline preview, and I can see that this specific pipeline has two different tasks, and I can jump from one to the other.
C: In my code, I can jump to my build tasks here, and I can see how they are dependent, and that they will both be executed before the next one. So you actually have all of that preview, very similar to the pipeline builder that you have in OpenShift, but in your VS Code, as you are doing that development work. So that's all I had. I will be monitoring the Twitch chat; if there are any questions, please go ahead and fire them away, and I will now hand it over to Brian to talk about serverless.
D: Cool, so yeah, I'm going to talk about one of my favorite things that just came out with OpenShift 4.4, which is pretty much that serverless is now GA within OpenShift. So now it's generally available: previously serverless has been in Tech Preview and Developer Preview releases, and with 4.4 it's now GA. We can consider it stable, at least for the serving aspect of serverless, and I'll get into some of the details here, but we do have a pretty good blog article that goes through a lot of the details.
D: I highly suggest you check out that information. One of the things with serverless is that it allows you to deploy applications and have them do things that are generally pretty nice to have. It would be a good recommended practice — say we're deploying at scale — to want a particular application to be able to scale down to zero pods running, so we're not wasting resources, instead of having, you know, maybe a deployment config with one pod that's always up and always running. There are reasons to do both, but serverless allows us to scale that down to zero, and it's pretty cool, it's pretty neat. On the screen right now you can see that I'm logged in already; I'm looking at the topology view in the developer console, and in here we already have a serverless application.
D: Our serverless deployment is already done, and I'll show you some examples of deploying a new one in a second, but first I want to see what are some of the new things that we have within the console to make it easier to work with serverless applications. The developer console has been getting better and better — it's still improving, and it is pretty awesome so far. Right now we can see that we have in this view the ability to actually look at our Knative service.
D: OpenShift Serverless uses Knative, and the Knative service is the main aspect of a service that's running within serverless. We can click on that Knative service within the developer console, and we can see some of the details — the stuff that's really important to me. I can see that this hello service has nothing running right now, and it tells me that, hey, all these revisions are scaled down to zero.
D: I didn't see anything crazy; it just took a second to load up, but I can see hello world version one — right, so that's cool. I can see some of the information here within the developer console: I can see that, hey, 80% of my traffic is going to this hello-v1 revision and 20% of the traffic is going to the version two revision. I set this up beforehand, and I'll show you how to do it later on, but this allows us to do something like a canary release.
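A split like that 80/20 one can also be set from the command line with the `kn` CLI shown later in the demo; the revision names here are assumptions:

```shell
kn service update hello \
  --traffic hello-v1=80 \
  --traffic hello-v2=20
```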
D: You can see right now the pods terminating and scaling back down to zero: there hasn't been any traffic coming in for about 60 seconds, give or take. That number's tunable, but that, basically, is the timing that says, hey, scale down, because nothing's coming in. Going back to the traffic distribution: serverless allows us to do things like canary releases, which typically might be a little harder to do when we're working with Kubernetes or OpenShift — you know, when we roll out a new version of our application.
D: How do we vet that that application is actually good and stable and working the way that it should? Well, serverless allows us to go and set some of these percentages. I can roll out a new revision right here, and a revision is a snapshot-in-time configuration of the service. An example would be that the new revision would reference the new tag of our container — because these are just containers — and we could say that 80 percent goes to hello-v1 and 20% goes to hello-v2.
D: And I could change that if I want, so I could go in here and say, hey, this is a little bit better now — I vetted it, I looked at the logs, things seemed to be OK — but let's make it a 50/50 split. We can see that I've got a couple of different other tags here, referencing different things, and the tags are pretty important, because they allow us to access these revisions outside of the normal everybody-goes-to-this-route. I could go to current...
D: ...dot whatever the rest of the route is, or previous dot whatever the rest of the route is, to hit a revision that's specific to that one, and I'll show you an example once I set this up. Let me go ahead and set these up — it looks like there might be a bug in this right now; it wouldn't let me save that with nothing there, and zero also wouldn't work.
D: So maybe we'll figure out that traffic distribution, or maybe I've got an update that I need to get to my OCP console, but either way, I switched out the percentages here, and I can go and see: I can click on Open URL for hello-v1, and this goes to the current hello. Whenever I go to a revision, I can go to the previous hello. So that's pretty neat: I can define a route, or a sub-route, or a traffic tag.
D: They're called both — they're interchangeable terms — but I can specify those and go directly to one of the revisions, or I can go to the main hello service route and get all that stuff. So it's cool that serverless gives me the ability to do this complex traffic distribution and networking on the cluster for new deployments and revisions without me even having to really think about it. All I do is specify: I want 20% to go here, 40% to go here, what have you. That's pretty neat. The other thing that this topology view gives us for a Knative service would be the route. We can go and drill in and understand what's going on with the route, and check out some of the configuration — if we wanted to look at the YAML, we could see how that stuff's built out. But one of the things with serverless is that it doesn't really require me to think about YAML; it makes it easier to deploy applications.
D: I don't have to worry about the YAML aspect of Kubernetes; I don't have to think about it. All I need to know is some of this stuff around the service. So anyway, let me go ahead and show you how easy it is to add an application and make it run as serverless. If I go into the developer console and hit Add, I can again choose — just like Jan showed — From Git, Container Image, etc.
D: So here we go: I just pasted in a simple hello app — this is version 2 of that app. I'm just going to leave it at the default here; I'm not going to change that, but I could change the name. Jan showed you what the application context was in the developer console, and hello-app is what it's going to be called. And then under Resources here, I'm going to just choose Knative service. Well — you see it says, hey, this is in Tech Preview, and you're like, yeah, that should be gone.
D: This is GA now, so there's an issue with that — that tag should not be there; this is definitely GA now. But either way, it's there, and we can choose Knative service, and I can go and define some of the details around here. I could go and specify some of the scaling information — say that I didn't want this to scale down to zero; maybe that property doesn't make sense for this particular serverless application, but other aspects of serverless do, such as the autoscaling.
D: That's already set up. I could go ahead and say always run one pod of this, and I could change some of the concurrency details. Serverless gives us the ability to scale up whenever, say, a hundred requests are concurrently coming into our application: if we have more than a hundred, then it will go ahead and scale up the application by default, automatically — I don't have to think about it — and I could change those limits here if I wanted to. But I'm just going to go ahead and set this to one pod always running and click Create. What this is doing is kicking off a Knative service: it's basically pulling in that image, and it's going to build out. We can see that it's running — I've got one available — and I can look...
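Under the hood, those form fields map onto the Knative Service spec; a rough sketch, with the image reference assumed:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"  # keep one pod always running
        autoscaling.knative.dev/target: "100"  # scale up around 100 concurrent requests
    spec:
      containers:
        - image: quay.io/example/hello-app:v2  # assumed image reference
```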
D: So that's pretty neat: I can see all this stuff. I deployed an application just like I would deploy any other app on OpenShift, and specified "use OpenShift Serverless" instead of the standard deployment config, and there we go, we're good. I can see the details within the developer console to get the information that I really need. If I want to, I can drill into the pod right from here; I can look at the logs really quickly.
D: One of the other big things with serverless, like I said, is the ability to use this stuff without touching YAML, and I showed you how to deploy an app without touching YAML in the console. But what if I like the CLI better? There are command-line tools that allow us to work with OpenShift Serverless on the command line. On my OpenShift console, I can click on the question mark up at the top and see Command Line Tools. I click on that.
D: This is the repository running on my OpenShift that has the signed command-line tools from Red Hat, and I can go and download the Helm one, or the oc command, or odo — pretty cool. And right here I've got kn available. This is the OpenShift Serverless command-line interface; it works with Linux, Mac, and Windows, and allows me to work with OpenShift Serverless on the command line. So let's see that in action — let me switch over to that.
D: I'm all set up now; let me make it a little smaller. So, oc whoami, just to double-check that I'm logged in — I'm logged in, I'm good. oc project — I'm on the serverless-tutorial project, so I'm good. All that stuff, just to make sure that's set up. And it's pretty nice to get this stuff working in your prompt, by the way — maybe there's a blog article.
D: We could do one talking about how to get this in your prompt; that might be useful later on. So kn is the command-line tool, and I already have it installed, and I have command-line completion set up as well. I just hit Tab to get some of this detail, and I can see what the kn command-line tool allows me to do.
D: I can work with the same things that you just saw me do within the console: I can work with plugins, and I can work with revisions and routes and services, and those are the things that are really important as far as OpenShift Serverless and the serving aspect go. One of the things that I'm not talking about a whole lot within OpenShift Serverless is the eventing aspect, and that is, I think, in Tech Preview — I'm not quite sure — with OpenShift.
D: It's coming pretty soon, and it allows me to act on events that happen: I can do things whenever a database gets updated, or whenever a file gets added to an S3 bucket, or something cool like that — the eventing aspect, using Camel K and all that stuff. That's stuff I'll talk about later on — not on this call, but once it starts getting a little more stable — and that's what really makes the serverless aspect shine.
D
But what we're talking about is the serving aspect and the service stuff, so kn service, and then in here I can see that I can create something, I can list something, I can delete it, etc. So let's just see what we have. Let me make it a little smaller so it all fits: kn service list. So we can see that I've got that hello
D
app that I just created, and it's running version two right there. Sorry, the hello app; it's running a revision name that's automatically generated, and then I have hello version two, which is the existing one that I had before. And I can do stuff that's pretty neat: here I can do kn describe, or kn service describe, say, hello, and in here I can get some of the details with that. I can see that, hey, I've got those percentages: 47 percent goes here, one percent goes here, etc. All right, I can see that information.
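The describe call from the demo looks roughly like this; the service name hello comes from the session, and the traffic percentages shown will depend on how the service was configured:

```shell
# Show details for the hello Knative service: revisions, tags,
# and the traffic split percentages across them
kn service describe hello
```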
D
I can see the tags that I talked about, hello, latest, preview, etc., and some of that detail. I can do kn revision list, and I can see all the revisions that I have, the ones that these guys are pointing to. I can see I've got revision version 2 and then version 1; I specified those names specifically, and I'll show you how to do that. And then I've got that apps one, which is the one that I created within the console.
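The revision listing he runs can be sketched as follows; the --service filter is optional:

```shell
# List every revision in the current namespace, with the service
# each belongs to and the traffic it receives
kn revision list

# Or narrow it to the revisions of a single service
kn revision list --service hello
```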
D
So we could see all that stuff there, and I could describe some of it and get more detail, but we've seen most of it in the service, so I don't think we need to. I can do kn route list and see all the routes that I have available to me, and maybe describe the hello route, because it's a little more complex. Right, I've got the same stuff:
D
I can see traffic targets, and I can see the URL for each individual traffic tag, so I could go directly to the previous one, for example. If I wanted to, I could curl that. It'll take a second again to, you know, spin up that container, but there we go, it's not too long, and I can see the hello world. And there we go; that stuff's pretty neat.
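A sketch of inspecting the route and curling a tagged revision directly; the previous tag is from the demo, and the actual hostname shape depends on your cluster's domain:

```shell
# Inspect the hello route to see its traffic targets and the
# dedicated URL generated for each tag
kn route describe hello

# Hit the revision behind the "previous" tag directly; the first
# request may pause briefly while the container scales up from zero
curl http://previous-hello-<namespace>.apps.<cluster-domain>
```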
So if I wanted to, say, create something, let's go and look: kn service create, and let's just do hello.
D
kn service create -h, and in here I can see, hey, this is the help, and you can see there's a ton of different flags and examples, like creating a service with multiple environment variables. This help page for the kn tool is really nice, and it gives you a lot of the stuff that you would want to create a Knative service really easily. And it's pretty neat, right? This one command, kn service create, deployed an application using the image I specified.
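The one-command deployment he refers to looks roughly like this; the image reference is illustrative, not the one from the session:

```shell
# Deploy a container image as a Knative service in one command;
# OpenShift Serverless creates the deployment, route, and
# autoscaling configuration behind the scenes
kn service create hello \
  --image quay.io/example/hello:latest
```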
D
Some of the stuff, like, I know in Kubernetes I need a namespace, and I want to give it a specific revision name, so I know I want that, right? But, like, if I wanted to specify, you know, the minimum scale, I could specify that in here, right? And I could specify environment variables using the --env flag with some of the details, and it tells me what I need to do to do that. So, instead of me having to know all of the YAML that's associated with that, which is, you know, a lot.
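Those flags can be combined like this; the values are illustrative, and note that later kn releases renamed --min-scale to --scale-min:

```shell
# Create the service with a pinned revision name, a minimum scale,
# and environment variables, instead of writing the YAML by hand
kn service create hello \
  --image quay.io/example/hello:latest \
  --revision-name hello-v1 \
  --min-scale 1 \
  --env TARGET=world \
  --env LOG_LEVEL=debug
```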
D
The service is under serving.knative.dev, and then hello -o yaml to see the YAML output of what that created for me, right? We can see that, hey, this is all this stuff it created from the details, but I don't really care. If I'm creating it, I would look under the spec stuff, and I would see that, hey, this is, you know, the image I specified; I need to specify a name in here, give it a stack. Like, I
D
don't need to know any of this to deploy my application. It doesn't matter; I'm just using the kn tool, specifying things on the command line, and tab completion helps me set up these flags. It's pretty nice, and the kn tool is really robust. It does quite a bit, and it's improving with each revision, or each version, of OpenShift Serverless. And, like I said, with eventing, you know, getting more and more out there, I'd imagine there are going to be a lot of really cool things we can do just with the kn tool and eventing and, well, serverless.
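The YAML round trip he describes can be sketched as follows; the serving.knative.dev group name lets oc disambiguate Knative services from core Kubernetes services:

```shell
# Dump the generated Knative Service as YAML to see what kn
# created without having authored it yourself
kn service describe hello -o yaml

# The same object fetched through its API group with oc
oc get services.serving.knative.dev hello -o yaml
```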
D
A
D
Definitely, yes. So with Tekton, our OpenShift Pipelines, as well as OpenShift Serverless, both of those require a cluster admin to go onto that cluster and install the operator. You can look in the OperatorHub that's on the sidebar of OpenShift in the admin view, and go and install the Serverless one or the Pipelines one into the openshift-operators namespace. There are instructions in our docs on how to set that stuff up. With Pipelines, you're done at that point; you can start using it. With OpenShift Serverless,
D
you have to then create a knative-serving project and deploy an instance of Knative Serving. You just look in the Installed Operators, look at OpenShift Serverless, and deploy that, and you can specify and customize the installation with it. But by default, the defaults work pretty nicely; in what I showed, I didn't customize anything and everything worked. So, those two steps. Once I'm done there, any user can deploy a pipeline or an OpenShift Serverless deployment.
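Those two post-install steps can be sketched from the CLI as well; the apiVersion below matches the 4.4-era operator, so check the CRD on your cluster before applying:

```shell
# Create the project that hosts the Knative Serving control plane
oc new-project knative-serving

# Deploy a default, uncustomized KnativeServing instance into it
oc apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec: {}
EOF
```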
A
And I'm going to ask one question, which is sort of a setup, because you kind of asked it, and I think you wanted Joel to answer it. So: are there plans for following the Kiali approach and showing a usage-based topology? Not topiary, topology. Because we've been talking about that a little bit, and the Kiali view is pretty nice. But do you have an answer for that?
A
Joel, I think you might have answered it a little bit in the chat there as well, but it's pretty much the difference between the Kiali and the service mesh approach to things.
D
There was that, and that was within the topology view. Sorry, not Joel, but: within the topology view you can connect applications and whatnot together, and it would be nice to maybe see the traffic flowing through there if, maybe, service mesh is installed or whatnot. That's what I was thinking, I don't know. Maybe there are questions there; I remember hearing some talk about some of that earlier this week, but I'm not quite sure.
D
C
Yeah, I saw that one. So, using pipelines with that, I don't have the answer, actually, so I will need to do a little bit more research. And yeah, if whoever asked the question wants to get in touch with me, maybe on Twitch, we can establish a way to follow up afterwards. I'll definitely follow up on that.
D
Yeah, so with that, I posted a link; hopefully you got it. I'm not sure if it was on Twitter or whatnot, but if not, feel free to reach out to me, or Joel, and sure, any of us; we all could help you out. But just basically look at the documentation for setting up a webhook on OpenShift, and there are instructions in there on how to do it with Bitbucket specifically. So in there, you definitely could set up a Bitbucket webhook.
D
A
C
A
A
If there are other questions coming in... I think you guys have done a really great job exploring some of these new features, and I'm really looking forward to getting you guys back again on a regular cadence, because I think this is a great way to educate the OpenShift community and get you guys some recognition for all the hard work you do demoing everything and making new things understandable and comprehensible, especially with all the new features coming out in each of the new releases. It's a lot to chew on.
A
There was one other question, while I'm here thanking you, and I think it's an interesting one. It was about the pipelines: are they meant to be used with other CI/CD systems, like Azure, GitHub Actions, etc., or are these Tekton pipelines meant to be used just by themselves? I don't know who wants to answer that question.
C
A
Out in the real world, where people are trying to do hybrid cloud all the time and figure out how to make all of these systems mesh, that adds another layer of complexity on everything. Lots of new features, lots of new platforms; all the platforms have their own approaches and tools. So again, we'll have you guys back many times, I'm sure. It's wonderful to have you here today, Diane.
B
A
All right, and with that, we'll thank our producer, Chris Short, again for backing us up and streaming us live everywhere we can possibly find a stream to be on, and we'll be back again tomorrow. Tomorrow we have Andrew Clay Shafer from the Global Transformation Office, who is going to talk about cloud-native operating models; he's one of our gurus on DevOps. So if you're around, join us again tomorrow at 9:00 a.m. Pacific, 12:00 noon, and I think it's 16:00 UTC, somewhere in the world, but you can check the calendar, and we'll be there too.
A
So, looking forward to hearing other things that you guys want to talk about, Brian and Jan, and then maybe we can drag Jason Dobies out with Josh Wood to talk more about Kubernetes operators, as they want to do that as much as possible, and showcase the work that they did in that wonderful book. So yeah, you'll get a couple of thumbs up on getting them to talk more about operators; we never talk enough about operators. Oh, there's the book, there's the plug! You can download it now and get it. There you go.
A
All right, with that, Chris, I think we're going to hang up our hat, the red hat, for the afternoon and let it rip, and we'll see you all again soon. I will also post this up on YouTube with some of the ins and outs edited out, so if you want to watch it again at your leisure, we'll have it all there, probably later this afternoon.