Description
Join us for an overview and demo of deploying a Quarkus-based application on OpenShift using Helm.
Speakers: Austin Dewey (Red Hat) and Andrew Block (Red Hat)
Host: Karena Angell
Karena: Welcome everyone to another great OpenShift Commons briefing. I'm super excited about this one. I am your host, Karena Angell, one of the OpenShift product managers, and we are here with Austin Dewey and Andrew Block, some of our amazing consultants, and actually they do pretty much everything. We're here to talk about Quarkus and Helm and how to deploy on OpenShift. Austin, please take it away.
Austin: All right, so today we're going to talk about deploying a Quarkus-based app on OpenShift. My name is Austin Dewey. I'm a senior consultant here at Red Hat, and I've been working with OpenShift for the past three years. Andy, did you want to introduce yourself real quick?
Andy: Yeah, sure. My name is Andy Block. I'm a Distinguished Architect with Red Hat. I've been working with customers across the globe for many years now, and I really want to talk about Helm and how we can use Helm to deploy Quarkus-based applications.
Austin: All right. So if you're here, you probably already have an understanding of Quarkus, or you've at least heard of it. Quarkus is our Kubernetes-native Java stack, and some of its features are very low memory usage, very fast boot time and first response time, and very high throughput. And as you can see here, if you're looking closely at this, there's a Quarkus plus native and a Quarkus plus JIT: there are two different ways you can run a Quarkus app.
Austin: So just a quick little refresher. This is a high-level abstraction of how OpenShift BuildConfigs work and how you would interact with one to build a Quarkus image. First, the BuildConfig clones your Git repo. Once that's done, behind the scenes the BuildConfig spawns a builder pod and runs your build inside that build pod. Then, once that's finished, it ships the image off to the internal registry and associates an image stream with it. Now, the challenge here is that to write a BuildConfig you need to know how the YAML is formatted and what OpenShift commands to run, and the Quarkus side of it comes in because you need to know which S2I builder to use to build your Quarkus app, how your Dockerfile should be written, and how to actually write that Dockerfile to create an efficient build.
Austin: Of course, building is only the first part. Once you've built the image, you need to think about how you're going to deploy it, and that also requires both OpenShift and Quarkus expertise. The OpenShift side comes in because you need to know what options are available to you: there are different OpenShift resources, like Deployment, Service, ConfigMap, and so on.
Austin: So this is where something like Helm comes in. Helm is known as the Kubernetes package manager. Think of something like dnf: you would run dnf install, followed by whatever it is that you want to install. Helm falls right in line with that with helm install. So we're using a Kubernetes package manager, as we'll get to, to install the Quarkus app onto OpenShift.
Austin: It also has a very active community. There's a handful of maintainers, and they're all very quick to answer questions, close issues, and implement new features. It's a very fast-moving project. Helm creates a wrapper around OpenShift resources called charts, and as we'll see later, a chart is going to serve as a wrapper around all of those resources required to deploy a Quarkus app.
B
Helm
also
allows
yaml
definitions
to
be
dynamically
generated.
So
what
you
see
here,
what
I
highlighted
in
red
is
kind
of
those
dynamic
sections
so,
for
example,
in
a
build
config.
If,
if
you
want
to
use
a
git
source,
you
probably
need
to
tell
it
what
your
uri
is
right,
where
they're
actually
going
to
download
that
that
code
from
that's
something
that
the
user
can
tell
helm.
B
B
Austin: You just tell it what the important parts are, and it'll go in and populate those for you. Users interact with their installation and configure it using what's called values. Values are essentially just parameters: you create a values file, and that's where you go in and define your different parameters. So for the build URI I have my Git repo there, my context directory within that, my build mode, which is really cool and we'll get to that one, and then my deployment resources.
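A values file along the lines Austin describes might look like the following. The key names are illustrative guesses, not the chart's authoritative schema; check the chart's documentation for the real keys.

```yaml
# Hypothetical values.yaml for the Quarkus Helm chart.
# Key names are illustrative, not the chart's exact schema.
build:
  uri: https://github.com/example/my-quarkus-app   # Git repo to clone
  contextDir: getting-started                      # directory containing the source
  mode: jvm                                        # "jvm" or "native"
deploy:
  resources:
    limits:
      memory: 512Mi
```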
Austin: I think there's a Node.js one under there as well. You run `helm install quarkus-app redhat-charts/quarkus` and then point it to your values file. You can also interact with it from the UI: all an administrator has to do is create a HelmChartRepository resource in OpenShift, and then developers can install any Helm chart from there through the UI, so that's a really cool feature. All right, so introducing for the very first time the marriage of Quarkus and Helm: I present to you the Quarkus Helm chart.
Andy: And one of the best parts about this is that this repository is automatically available inside your OpenShift cluster right now. So if you have an OpenShift cluster and you go to the Developer console perspective, you'll see it there. Check it out, and we'll walk through an example a little later on.
Austin: As mentioned at the beginning, there are a couple of different ways you can build and run a Quarkus app: there's JVM mode and there's native mode. In the Helm chart we abstract a lot of this complexity with a single value called build.mode. So I can say my mode is jvm, and if I say that, it's going to create basically an S2I build, and it's going to hand-pick the best S2I builder for it, which is probably Java 11, the one that most people want to use.
B
Kick
off
that
build
automatically,
so
it
handles
a
lot
of
things
already
for
you.
If
you
choose
building
build.mode
equals
native,
then
it
will
give
you
a
default
inline
docker
file
with
an
additional
built-in
input,
and
if
you
want
to
override
that
docker
file,
of
course
you
can,
but
it
tries
to
give
you
as
much
as
possible
by
default,
so
you
have
to
do
as
little
as
work
as
possible.
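As a sketch, and assuming the build.mode key described in the talk, switching between the two build styles is a one-line change in the values file:

```yaml
# JVM mode: the chart creates an S2I build with a Java builder image.
build:
  mode: jvm

# Native mode: the chart instead generates an inline multi-stage
# Dockerfile build (uncomment to use).
# build:
#   mode: native
```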
B
So
here's
kind
of
what
that
looks
like
in
your
build
config.
If
you
choose
a
native
build,
I
I
wanted
to
point
out
the
default
docker
file
that
it
uses.
So
it
actually
gives
you
a
multi-stage
docker
file
here
in
the
mandrel
20
rail
8
image
is
where
it's
it's
building
your
native
binary
and
then
in
the
second
stage.
It
just
copies
it
over
to
ubi
minimal
image,
to
give
you
as
small,
of
a
runtime
as
possible.
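A minimal sketch of that multi-stage Dockerfile, with image references and build commands as illustrative assumptions rather than the chart's exact defaults:

```dockerfile
# Stage 1: compile the native binary on a Mandrel builder image
# (image references below are placeholders, not the chart's defaults).
FROM quay.io/quarkus/ubi-quarkus-mandrel:20-rhel8 AS build
COPY . /code
WORKDIR /code
RUN ./mvnw package -Pnative

# Stage 2: copy only the binary into a small UBI Minimal runtime image.
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=build /code/target/*-runner /application
EXPOSE 8080
CMD ["/application"]
```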
B
Another
feature
here
is
externalized
application
properties
so
oftentimes
when
you're
working
with
java
applications,
you
have
an
application
properties
file
you'd
like
to
externalize
that
from
your
app,
so
you
don't
have
to
recompile
it.
You
know
between
dev
tests
problems,
any
other
stages
that
you
have
just
extract,
those
application
properties
and
you
can
use
the
same
image
throughout
all
your
stages.
The
quarkus
home
chart
makes
this
really
easy
by
using
the
application
properties
values.
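A hedged sketch of what those values might look like; the key names are assumptions, with the idea being that the chart renders the properties into a ConfigMap that gets mounted into the pod:

```yaml
# Illustrative only; the chart's real keys may differ.
applicationProperties:
  enabled: true
  properties: |-
    greeting.message=Hello from the ConfigMap
    quarkus.http.port=8080
```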
B
So
what
it'll
do
is
it'll
create
a
config
map,
with
your
specified
application
properties
that
you
set
and
it'll
automatically
create
a
volume
mount
on
your
deployment
at
deployment
config.
So
all
you
have
to
do
is
use
the
values,
tell
them
what
your
properties
are
and
it'll
automatically
create.
All
that
configuration
behind
the
scenes,
config
change
and
image
change
figures.
So
this
is
a
really
cool
feature
where,
as
soon
as
you
install
the
helm
chart,
the
build
will
automatically
start.
So
you
can
get
up,
go
make
a
snack
go
make
a
cup
of
coffee.
B
By
the
time
you
come
back,
those
pods
will
be
automatically
rolled
out,
so
it
creates
the
config
change
triggers
and
the
image
change
triggers
in
these
image
resources
for
you
by
default,
and
one
last
thing
that
I
want
to
call
here
is
freeform
fields.
You
know,
as
you
guys
know,
there's
a
lot
of
different
ways.
You
can
configure
these
openshift
resources.
We
don't
want
to
try
to
box
people
into
a
very
particular
configuration.
B
We
want
to
give
people
the
ability
to
set
things
like
environment
variables
in
it
and
extra
sidecar
containers,
liveness
and
radiance
probes,
so
on
and
so
forth.
So
we
have
some
what
I'd
call
freeform
values
that
you
can
actually
just
pass
in
verbatim
pod
templates
or
you
know,
volume
mounts
to
configure
your
application.
However,
you
see
fit,
and
so
the
end
result
here.
What
what
you'll
see
is
these
six
resources?
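For instance, the freeform values described above might be passed through verbatim like this (key names are assumptions for illustration):

```yaml
# Hypothetical freeform values, copied verbatim into the pod template.
deploy:
  env:
    - name: GREETING
      value: hello
  volumeMounts:
    - name: extra-config
      mountPath: /deployments/config
  volumes:
    - name: extra-config
      configMap:
        name: app-config
```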
B
So,
on
the
build
side,
you'll
see
the
build
config
and
the
image
stream,
and
you
know
again
like
it's
automatically
built
as
soon
as
you
install
the
helm
chart
and
then
once
the
build
is
finished,
the
deployment
will
spin
up
and
any
other
resources
that
are
required
for
your
installation.
Are
there
also-
and
this
can
be
done
with
as
little
as
one
value
here.
B
This
is
two
lines
of
vmware
to
create
an
entire
corkus,
app
tell
it
what
your
build
uri
is
it'll
go
out,
clone
it
build
it,
deploy
it
and,
of
course,
there's
several
other
values
that
you
can
use,
but
we
we
try
to
provide
as
many
defaults
as
you
can
just
try
to
try
to
bake
in
those
best
practices
right
from
the
get
go
and
if
you
need
to
override
anything.
Of
course,
you
can.
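The minimal install Austin describes boils down to a two-line values file; the key name is assumed from the talk:

```yaml
# The entire values file for a default build and deploy.
build:
  uri: https://github.com/example/my-quarkus-app
```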
Austin: If you check these values into Git, what you'll probably see is maybe some kind of a helm folder or a .helm folder alongside your application files. In this case I have a .helm/values.yaml file, and this is really useful because a CI/CD tool can just pick up this values file and run.
Austin: So there are a couple of different ways that you can interact with the Helm chart, and a couple of quick ways to learn more or get started. I left a few resources here. The Quarkus guides: there are many different quickstarts that the Quarkus team has put together, and they're all just awesome, so I highly recommend checking those out.
Austin: I would be remiss if I didn't plug Learn Helm. This is a book that Andy and I wrote, and it was released earlier this year. You'll learn about all the cool things that I talked about regarding Helm, and much more, in that book, so I definitely recommend checking that out. And if you have any other Helm questions, I left their Slack, their documentation, and some other resources here. So, without further ado, Andy is going to show you how to build and deploy a self-healing Quarkus application using the Quarkus Helm chart. Andy?
Andy: All right, hopefully everyone can see my screen. As Austin just walked through, we are going to demonstrate how to deploy a simple Quarkus application using a Helm chart. This is a brand new, fresh-off-the-shelf OpenShift 4.6 environment, and we're going to demonstrate a lot of the features that are part of OpenShift, including Helm support, as well as the ability to leverage the chart that Austin just walked through. As a first step, we'll use the Quarkus OpenAPI Swagger UI quickstart; this is the quickstart that walks through how to use the Swagger APIs to deploy a simple microservice using Quarkus.
Andy: There was a recent blog post that triggered my fancy regarding how to leverage SmallRye as a way to enhance your Quarkus applications, and other cloud-native applications, to add additional metrics and monitoring. That's basically what I did: I took this example, added the SmallRye extension, and away we went. So let's go ahead and deploy that into OpenShift.
C
So
from
the
topology
from
the
list
of
options
on
what's
deployed
in
openshift,
let's
go
ahead
and
pick
a
hell
charge
makes
sense,
as
I
mentioned
earlier.
These
are
the
out
of
the
box
example
that
come
with
openshift
we're
going
to
be
expanding
these
in
future
releases.
But,
as
you
see
here
at
the
bottom,
we
have
a
brand
new
corkus
helm
chart,
let's
go
ahead
and
click
on
install
helm
chart
first
thing
we
want
to
do
is
we
want
to
give
it
a
name?
Andy: There are certain values that you do need to include, and one of them is the location of the chart, or rather the build URI, that you want to deploy. So I'm going to go over here and pull in my set of values with my customizations, and we're going to notice a few things; we'll walk through this chart as well. On the deploy side, we're going to basically say that we don't want any probes.
Andy: We don't want any liveness or readiness probes. Then we want to talk about our build side. This is the most important part: where is our code going to be built from? It's going to be built from my personal repository, which is basically just a fork of the Quarkus quickstart applications. We're going to reference the openapi-smallrye branch, which is basically just that sample Quarkus application with the additional extension for SmallRye, and then we're going to set a couple of things.
C
At
least
one
environment
variable
basically
going
to
tell
open
shift
as
part
of
the
source
to
image
process
what
files
we
want
to
include
in
the
deployments
directory,
so
that
when
we
want
to
start
the
application,
it
will
have
everything
it
needs
to
start
and
then
finally,
I
did
skip
over
the
context
directory.
Basically,
within
this
repository,
which
directory
has
my
source
code
in
it.
C
So
we
have
our
release
name,
we
have
our
set
of
values,
let's
go
ahead
and
click
on
install
and
it's
going
to
go
ahead
and
start
up
and
deploy
that
helm
chart
and
by
doing
so,
you'll
see
all
the
different
resources
that
were
created.
We
have
a
deployment,
a
build
config,
a
service,
an
image
stream
and
a
route.
C
So
we
can
access
the
application
first
thing
that
was
triggered
was
a
brand
new,
build,
let's
go
ahead
and
click
on
builds
and
we
can
see
we
have
a
running
build
now
for
those
of
you
who
know
java,
you
know
that
when
you're,
when
you're
building
a
java
application,
it
will
take
a
few
moments
for
it
to,
as
I
say,
download
the
world.
So
it's
going
to
basically
do
a
typical
maven
based
build
for
this
application.
C
So
if
I
take
about
a
minute,
while
it
does
that,
let's
just
go
ahead
and
just
browse
around
and
see
what
else
it
deployed
or
stood
up,
what
you
will
notice
is
that
it
will
go
ahead
and
have
an
image
pull
back
off.
That's
because
we're
leveraging
a
deployment,
and
since
there
is
no
image
that
has
been
built,
yet
it's
going
to
fail,
which
is
okay,
because
if
we
go
back
to
the
helm
chart
that
we
just
created,
which
you
will
know
notice,
is
that
there's
a
set
of
release,
notes
that
come
with
that.
C
So
after
we
deploy
the
application,
if
you
ever
use
the
user
interf
the
command
line
tool
for
helm,
you'll
notice
that
whenever
you
install
or
upgrade
a
chart,
you
might
see
some
additional
helper
text.
This
is
through
what
they
call
release
notes
and,
as
you
see
here,
we
even
call
out
specifically
that
your
deployments
will
report
this
image.
Air
pull
because
it's
waiting
for
a
build
very
important.
I've
had
a
lot
of
customers
and
other
other
individuals.
C
Who've
been
working
with
helen
charts
and
just
openshift
in
general,
get
confused
when
they're
leveraging
native
deployment
resources,
because
it
doesn't
have
that
same
integrated
functionality
to
wait
for
the
application
to
be
built
until
it
deploys
it
out.
So
one
of
the
things
to
be
to
be
just
to
keep
in
mind,
let's
go
back
to
the
build
again
and
we
should
be
able
to
see
that
we
are
able
to
hopefully
finish
that
build.
C
And
once
it
comes
up,
we
should
be
able
to
see
that
application
running.
As
you
see,
we
get
the
nice
beautiful,
blue
orb,
we
can
go
in
and
click
on
the
url
we
go
to
the
url
and
we
get
this
wonderful.
Application
has
been
deployed
in
just
a
few
steps,
we're
able
to
get
our
application
deployed
and
running
and
that's
great
perfect,
but
we
didn't
really
follow
best
practices
when
it
comes
to
proper
kubernetes
and
open
shift
application
deployments.
C
Do
you
want
to
know
why?
Because
we
didn't
set
any
liveness
or
readiness
probes,
we
go
back
to
our
application.
You'll
see
that
is
missing
and
that's
not
good
practice
always
make
sure
you
have
readiness
and
liveness
probes
of
some
sort
to
the
kubernetes
and
openshift
know
when
your
application
is
running
or
not.
C
This
is
the
pod
definition
as
you'll
see
here.
There's
nothing,
there's
no
way
for
the
application
to
tell
whether
it's
healthy
and
running
and
that's
a
problem
when
it
comes
to
long
living
or
short
living
microservices,
because
in
a
cloud
native
environment
date
is
not
guaranteed.
Do
you
always
want
to
make
sure
you
have
all
the
health
checks
possible?
C
One
of
the
benefits
is
that
not
only
does
our
the
helm
corpus
chart
support
this,
but
this
application
also
exposes
liveness
and
readiness
probes
and
that's
through
that
small
rye
extension,
we
go
back
to
the
application
itself
and
we
go
to
slash
health.
Slash
ready,
I.e.
Is
this
application
ready
to
take
on
traffic
you'll,
see
that
it's
returning
the
status
up?
It
is
ready.
It'll
return
to
200
and
openshift
will
be
able
to
determine
that
you
can
start
sending
traffic
to
it
same
thing
on
a
liveness
probe.
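Wired into a pod spec, those SmallRye Health endpoints become probes along these lines (a sketch; the port and timings are assumptions, not the chart's exact output):

```yaml
# Kubernetes probe definitions pointing at SmallRye Health endpoints.
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```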
C
We
can
just
go
to
the
slash,
live
and
we
can
say
yep
I'm
alive,
so
we
should
go
ahead
and
make
sure
that
we
don't
delete
any
of
the
pods,
because
it's
unhealthy.
It
is
healthy
and
we
can
go
ahead
and
make
sure
traffic
and
application
state
is
still
being
routed
to
it.
Okay,
so
we
don't
have
our
current
chart
or
current
application
implementing
liveness
or
readiness
probes.
How
do
we
do
that?
Well,
let's
go
ahead
and
upgrade
our
application
to
a
new
revision.
C
If
you
want
to
a
lot
of
the
functionality
that
you
have
within
the
cli
is
exposed
in
the
openshift
web
console
going
back
over
now
to
the
topology
view,
we
should
be
able
to
see
the
application
deployed
again.
If
we
go
look
at
the
pod
definition,
we
should
be
able
to
see
the
updated
version
with
the
proper
checks.
Now,
in
line.
C
As
you
see
here,
we
have
our
liveness
probe
now
defined
and
scrolling
up
a
little
further
up.
We
now
have
our
readiness
probe
we're
able
to
see.
We
are
now
fully
running.
The
application
is
not
crashing,
because
if
it
was
not
probing
correctly,
the
application
would
start
to
restart
and
traffic
would
not
be
able
to
be
routed
to
the
application.
Andy: So if we go over here to the /metrics endpoint, you'll see we're now able to actually get different metrics regarding our application. And OpenShift makes it really easy for you to integrate application metrics into the console, so you can start monitoring application metrics quickly and easily. That's through the use of ServiceMonitors and, especially for end users, the user workload monitoring feature of OpenShift.
C
If
you
and
your
cluster
administrators
have
enabled
that
feature,
it
makes
it
very
easy
for
you
to
go
in
and
be
able
to
monitor
your
application
quickly,
and
I'm
going
to
show
that
really
fast,
because
that
was
enabled
inside
my
cluster.
So
how
do
we
go
ahead
and
monitor
my
own
application?
I
just
deployed
doing
so
we'll
go
in
to
there
is
this
brand
new
monitoring
tab
called
newish
over
on
the
openshift
side
in
text,
review
status
still
and
there's
a
way
to
go
ahead
and
grab
metrics?
C
You
can
see
a
lot
of
different
cpu
and
a
lot
of
a
lot
of
different
options
out
of
the
box,
but
we're
going
to
first
need
to
tell
openshift
how
to
monitor
my
application
and
to
do
so.
We,
as
I
mentioned
previously,
we're
going
to
set
up
this
m.
This
service
monitor
object.
So
let
me
just
copy
this
over
and
we
can
kind
of
walk
through
what
a
service
monitor
does.
Andy: The first thing we want to do is give it a name; we'll just call it openapi-smallrye. We then need to tell it which endpoints to reach: basically, what service port, what scheme, and how often it should go and scrape the application. And, most importantly, based upon the ports that were determined, which pods do I filter on, out of all the pods that are running inside my project?
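A ServiceMonitor along the lines Andy walks through might look like the following; the names, labels, and port are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openapi-smallrye
spec:
  endpoints:
    - port: http        # named port on the Service to scrape
      scheme: http
      path: /metrics
      interval: 30s     # how often Prometheus scrapes
  selector:
    matchLabels:
      app: openapi-smallrye   # filter to this application's pods
```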
Andy: We filter on a label, which is basically my pod name, and monitor it at the /metrics endpoint. If your application happened to be exposing metrics at a different endpoint, you could customize the ServiceMonitor to point at that different endpoint. Let's go ahead and click on Create, and within 30 seconds, which is how often I told it to scrape, we should be able to start looking for metrics inside our monitoring tab.
Andy: You can pick any one of these different metrics, and then Prometheus will start collecting it, and you can start to graph it within the user interface. Now I'm going to go back to my handy-dandy cheat sheet; that's what I love having, just in case, because, you know, live demos are fun. We'll go ahead and grab the amount of heap size that is currently being used by my application.
C
So
we
can
go
back
over
here
to
the
metrics
tab,
click
on
custom
query:
we
can
go
ahead
and
grab
that
you'll
see
that
we
can
now
pull
in
information.
You
can
see
the
information
started
about
two
or
three
about
about
a
minute
ago
at
11,
29
central
time,
and
it
will
then
shave.
This
is
the
pod
is
running
just
to
also
demonstrate
another
pod.
Andy: Let's see if it's gotten a chance to pull that up yet, and it has: you'll see now we have two pods that are running, and in a moment we'll be able to start tracking that and see a comparison between the two. So, as you've seen thus far, we've gone ahead and taken a Quarkus-based application, built it, and deployed it using a Helm chart; we've upgraded the chart; and we've shown how we can now monitor the application.
Andy: Now, let's take that one step forward. We went ahead and leveraged the out-of-the-box Helm charts, but what happens if you have a Helm chart in another Helm repository? Let's just say that you wanted to extend or customize the Quarkus-based charts for your organization.
Andy: Maybe you wanted to take the chart that was provided out of the box in the OpenShift console and customize it, because your organization might have some specific features or customizations that you just couldn't get out of our chart. Let's show how we can leverage the OpenShift console, and how to extend the functionality of OpenShift, by adding a custom Helm repository.
Andy: To do so, we'll add a Helm repository from the Red Hat Community of Practice. If you're unaware, that's a group of individuals within Red Hat that shares knowledge around a specific area, in this example container knowledge, and Helm obviously being a container technology. We wanted to go ahead and show some charts that the team and the group within Red Hat has had a chance to work on. So we'll go ahead and add this brand new chart repository, which is basically just pulling the chart repository out of GitHub.
Andy: We can go back over here to our OpenShift console, and once again import YAML, basically saying we want to import the Red Hat CoP chart repository: give it the URL where my charts are located and click on Create. We'll give it a moment, and then, when it's gotten a chance to pull down the charts, we can go back to the Add button.
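The imported resource is a HelmChartRepository, roughly the following, with a placeholder URL:

```yaml
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: redhat-cop
spec:
  connectionConfig:
    url: https://example.github.io/helm-charts   # placeholder chart repo URL
```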
Andy: This is just yet another example of how OpenShift really enables you to extend the functionality to suit your own needs, and the ability to leverage custom Helm chart repositories is just one of the customizations that are available in OpenShift. And finally, before I conclude, I wanted to talk about the ability to customize OpenShift environments. Especially with a lot of my customers, the ability to customize it to suit their own needs is a big deal.
Andy: There's a lot of interest around that, because it kind of makes developers feel a little bit more at home. It's not just some off-the-shelf product they went ahead and acquired; they can kind of tune and flavor it, and OpenShift provides many different ways that you can customize the OpenShift console. And to add a little flair to that, the developer tools team is very much interested in how you are going ahead and customizing the OpenShift console, and to that end we have a little bit of a competition going on.
Andy: So if you're interested in entering it, you have the ability to win some prizes, because who doesn't love prizes? It's the holiday season, the season of giving, and the Red Hat developer team is going to give you the ability to win not only swag, because everyone loves swag, but also the ability to appear on OpenShift TV, on a broadcast just like this, in the future.
Andy: I hope you had a chance to have some fun learning about how to deploy Helm charts on OpenShift, how to leverage a Helm chart with Quarkus, and some of the ways that you can actually use Supersonic Subatomic Java to quickly and easily spin up an application on OpenShift. Thanks.
Karena: Exactly. So, do we have any questions? I did see that Daniel asked whether you're using Quarkus 0.0.1; I saw the chart with version 0.0.1, but...
Andy: We are, and one thing, because Daniel actually pinged me on the side: the one thing you need to make sure that you do is specify the location of the Git repository. He asked because, when he tried deploying the chart as it was out of the box, he was getting errors, since he hadn't specified the actual Git repository for the chart. So I'm just going to go share my screen really fast; that was really important, and I'll just quickly walk through it.
Andy: And that's important too; as you mentioned, I think that's what he was running into, and the console displayed that as well. Now, you may be interested in, or you may have, an existing Quarkus image that was already available, so you don't need to build it in OpenShift. There is functionality within the Helm chart itself to not build an application and instead just go ahead and leverage an image that already exists. That functionality is there too.
Andy: So you're thinking more of coming from an enterprise application server type of deployment, and you want to move toward lightweight microservices with Quarkus. That's a bit of a loaded question, only because it depends on whether the application requires a large amount of refactoring. What you want to do, once you have an assessment of your application, is use domain-driven design: determine what your bounded contexts are, determine what can be broken up into smaller chunks, and then from that go ahead and deploy it. But what I would say is, if you're just getting into Quarkus, start small. Don't try to boil the ocean. Take some simple use cases: if you have a simple application in your organization, use that as a first step to get familiar with Quarkus, and then build up from there. Austin, do you have any thoughts?
Austin: With JVM mode you get higher memory usage and a little bit slower startup time, but overall higher throughput, so you're going to have to decide which one is best for you. I would probably start with JVM just to get going because, like you said, start small, and I think that JVM mode is a little bit easier to get started with. That's just my opinion, but you're going to have to make that decision at some point.
Karena: So I think it's a good time to also show, and you put it on the resources slide for Quarkus, but this is such a good resource to start coding with Quarkus: going through and being able to pick your extensions and what you want to do. Do either of you want to talk about this briefly?
B
Yeah
this
is
very
similar
to
like
the
spring
initializer.
If
you're
familiar
with
that,
it's
a
very
simple,
just
kind
of
point,
point
and
click
interface.
Tell
it
tell
it
what
what
extensions,
what
dependencies
you're
interested
in
what
your
requirements
are.
Once
you
click
generate
your
application
it'll
actually
send
you
a
zip
file
of
just
kind
of
a
basic
skeleton
code
with
a
pawn.xml
there,
and
you
know
some
basic
skeleton
files.
C
Basically,
what
I
use
to
get
started,
even
if
I
already
have
an
application,
I
know
I'm
going
to
develop.
I
always
start
with
this,
because
it
will
usually
make
my
life
a
little
easier
to
get
started.
I'll
have
a
lot
of
the
features
out
of
the
box,
and
I
can
go
ahead
and
add
that
field
of
cli
but
might
as
well
have
it
do
it
for
me.
Karena: So Daniel sent this over, appreciate it; now I need to go read it: Four Reasons to Try Quarkus. Go download that; we'll add it to the resources slide and post that as well. Awesome, thanks Daniel.
A
And
more
resources,
thank
you.
Doug
user
stories
on
quarkus,
we'll
add
those
in
too
I
love
user
stories.
I
mean,
what's
the
point,
if
we
don't
know
how
to
use
it
or
why
people
are
using
it
right.
C
I
think
it's
more
of
the
the
the
the
battle
tested.
You
know
whether
it's
going
to
be
the
lessons
learned,
the
successes
and
like
all
technology
there
will
be
scars
battle.
Scars
are
the
best
kind,
because
you
learn
the
most
from
them.
It
makes
you
a
better
developer,
coder
and
just
maintainer
of
the
project.
Andy: Yeah, we're very interested in just the Helm experience with OpenShift. We showed off some of the functionality that's part of the product as it is today. How can we make it better? What's working well, what's not working well? We want to hear your feedback, because how can we make it better if we don't know? It's like the tree falling in the forest with no one around.
Karena: Right, exactly. Now, Daniel, I'm definitely going to pick on you, because you've been playing with it. As you've been going through it, creating Helm charts or playing with the Helm charts, have you seen anything?
Austin: Yeah, thanks a lot everybody, and if you have any questions or issues that show up, definitely file them on the redhat-charts repo. Actually, Karena, I did not include that in the slides, so we might want to throw the link to the redhat-charts Git repo in there, so people can contribute.
Karena: And thanks for joining us on this holiday week, at least for the U.S. We'll be sending these out to the YouTube channel and to SlideShare. Thank you so much, everybody, for joining us.