From YouTube: State of Platform Services | Rob Szumski William Oliveira | OpenShift Commons Gathering
Description
State of Platform Services Rob Szumski Siamak Sadeghianfar William Oliveira OpenShift Commons Gathering
A
Today we're going to be talking about the platform services roadmap for OpenShift. But before we begin, I just want to talk about OpenShift 4 in general for a few seconds to set some context. With OpenShift 4, we reimagined what a cloud-native Kubernetes tech stack looks like: from how it's installed, bringing things like immutable infrastructure into the mix, to adding Operators to manage the entire state of the cluster, like a team of operations superheroes. Starting with this theme of automation and day-2 management top of mind means that the cluster is really smart.
A
It keeps pushing innovation up into the tools that your developers are using. When you've got a really stable, really dynamic cluster, you can make it really easy for cluster admins to push out a stream of updates for things like service mesh, pipelines, and serverless, and new versions of Kubernetes, gaining all the innovation that happens in that upstream community. That innovation is baked into every layer of OpenShift, and it forms a holistic platform for running your hybrid cloud.
A
Now, today we're going to focus on these blue boxes: the cluster services that enable your development teams to write code, ship software, and manage application stacks. Other sessions cover the other boxes in great detail, so check those out on the agenda. When we talk about the application stack, we mean one that's born in a cloud-native world. Tools like Istio, Knative, and Tekton empower your applications to take advantage of the new automation that the infrastructure can provide, producing apps that are supercharged. This new paradigm does require new tools, like Eclipse Che, Quarkus, and the Operator Framework, and we're super excited to see what you build.
So, let's dive in and see how this stack is the perfect match for OpenShift. Now, platform services are some of the key value-adds to the platform, and what they do is tie these features into your workloads, so your workloads have the unique capabilities that the platform provides.
A
So, things like we talked about earlier — service mesh, pipelines, and serverless — as well as other capabilities, like doing usage tracking and chargeback across the platform, and granting all of your applications access to our full logging stack. We're going to dig into each one of these in a little bit more detail.
A
Tracking usage across a cluster is important for every business, but especially in multi-tenant clusters. OpenShift's Metering Operator allows cluster administrators to schedule reports and track usage for CPU, RAM, and other metrics as your developers consume resources inside their namespaces.
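As a sketch of what scheduling such a report involves, here is a minimal Metering `Report` custom resource. The query name, API group, and namespace below follow the upstream metering operator's conventions rather than anything shown in the talk, so treat them as illustrative.

```shell
# Illustrative sketch of a scheduled Metering report (names are assumptions,
# based on the upstream metering operator, not taken from this talk).
cat > /tmp/cpu-report.yaml <<'EOF'
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-daily
  namespace: openshift-metering
spec:
  # a built-in ReportQuery that aggregates CPU requests per namespace
  query: namespace-cpu-request
  schedule:
    period: daily
EOF
# On a cluster you would apply it with: oc apply -f /tmp/cpu-report.yaml
grep -c 'kind: Report' /tmp/cpu-report.yaml
```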
A
What's exciting about metering is that it's just a really great piece of technology as a general usage collector, and it's very unopinionated about how that data is used. This makes it perfect for plugging into business intelligence tools that your company might run, or other specific workflows that match how your business units and your teams are set up. You can also use metering as a collector for two of our products: Red Hat cost management and the Red Hat Marketplace. Let's dig into both in more detail. The Red Hat cost management tool combines IaaS usage data with Red Hat subscription usage to give your company visibility into your spend, and to map that spend to the different teams and projects that are underway. This is a hosted service that works across multiple clusters and multiple clouds, bringing you that true hybrid capability.
A
Procurement teams can also simplify approvals and track application usage from cluster to cluster or team to team, to make sure that usage remains within their desired levels.
Operators remain very popular with OpenShift users, and this year will bring enhancements to the experience of building operators and of managing operators as an administrator. The first big improvement is unifying the object model into a single Operator object, which will help users of the cluster discover operators, but will also help developers in testing them, especially combined with our new ability to bundle custom functional tests together.
Also updated is a new, simplified, SemVer-based upgrade logic, which will join the more advanced update-graph capabilities that exist today, to match really complex needs as well as very simple ones. Improving our tools for building customized catalogs and mirroring that content into offline environments is useful for both operator developers and cluster admins. This will make it easier for developers to test their upgrade paths and to register in-progress operator catalogs — you can picture something like a nightly catalog.
A
For example, cluster admins will very easily be able to bundle specific versions of operators instead of grabbing the entire latest catalog. This change will be great for improving mirror times into disconnected environments, as well as for better curation overall: you want to give your developers just the sets of tools that you want them to have. And lastly, we're very excited to bring the Operator Framework into the CNCF as a sandbox project. We've been hard at work on this, and we're very excited to see more innovation and community engagement as we go, with the CNCF's backing.
C
Thank you, Rob. Now let's talk about OpenShift Service Mesh. As some of you might already know, a service mesh is a dedicated infrastructure layer for handling service-to-service communication. It is responsible for the reliable delivery of requests through the complex topology of services that usually comprises a modern cloud-native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware of them, by leveraging a pattern called sidecar containers.
C
As we continue the development of OpenShift Service Mesh, we are increasing the flexibility of how our customers manage their systems and how we, as Red Hat, can aid with that. At the beginning of April, we released version 1.1 of OpenShift Service Mesh. This release is backwards compatible across the last two versions of OpenShift: through the operator-managed install, we can introspect the cluster version and perform the correct steps to ensure you have a working system. In this newest release, we have a number of exciting things to talk about.
C
Here is an example of one of the exciting new features in Kiali: the ability to drill down into charts when you see something interesting to inspect. But beyond those high-level details, there are a number of aspects of Istio which are specifically exciting to talk about. For users who are allowing Citadel to fully provision and manage the internal certificate authority for mutual TLS, the setup is even more streamlined; beyond that, we can also monitor and reconcile issues with the expiration of the underlying root certificate.
C
There are also a number of verbs that have been added to the istioctl (Istio control) tooling, which can provide improvements for troubleshooting misconfigurations. In the context of traffic management, we have new items. The new authorization policy mechanism has now graduated to beta status; while of course there are details to be worked out, this will move us towards more granular control over how users can engage with services running in the mesh.
Users of traffic mirroring — a very popular feature, also called dark launches — will be excited to hear that they can now send a fraction of the traffic to a service, rather than a whole copy of all the packets. This will give developers and site administrators a way to test their code earlier and more often, leading to greater service reliability.
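As a hedged sketch of what mirroring a fraction of traffic looks like in upstream Istio's `VirtualService` API: the host and subset names below are placeholders, and the exact percentage field has varied across Istio releases, so check the docs for your version.

```shell
# Illustrative only: mirror 10% of live traffic for the "reviews" service
# (v1 serves users) to a dark-launched v2. Names are placeholders.
cat > /tmp/mirror.yaml <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1      # all user-facing traffic still goes to v1
    mirror:
      host: reviews
      subset: v2        # v2 receives fire-and-forget copies
    mirrorPercent: 10   # only a fraction, not a full copy of all packets
EOF
grep -c 'mirrorPercent' /tmp/mirror.yaml
```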
C
Openshift
service
service
workloads
are
increasing
in
popularity
for
cloud
and
on-premise
deployments.
We
have
big
service
capabilities
into
the
platform
with
openshift
surplice
that
enables
almost
any
containerized
application
to
run
a
service.
That
means
you
can
choose
any
programming
language
of
choice
and
enable
auto
scaling.
Behavior
is
scaling
up
to
meet
the
demand
and
scale
down
even
to
zero.
Saving.
Resources
beyond
doubt
is
scaling
for
HTTP
requests.
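The scale-to-zero behavior described above can be sketched with a minimal Knative `Service`; the image and annotation values here are placeholders, not something from the talk.

```shell
# Minimal Knative Service sketch: request-driven autoscaling, down to zero.
cat > /tmp/greeter.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    metadata:
      annotations:
        # target concurrent requests per pod before scaling out
        autoscaling.knative.dev/target: "10"
        # "0" lets the service scale all the way down when idle
        autoscaling.knative.dev/minScale: "0"
    spec:
      containers:
      - image: quay.io/example/greeter:latest   # placeholder image
EOF
grep -c 'autoscaling.knative.dev' /tmp/greeter.yaml
```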
C
You can trigger those serverless containers from a variety of event sources, receiving events such as Kafka messages, file uploads to storage, timers and recurring jobs, and more than a hundred other event sources — like Salesforce, ServiceNow, and email — powered by Camel K. OpenShift Serverless is based on the open source project Knative, one of the fastest-growing serverless projects in the market. This ensures that you don't suffer from lock-in concerns and can still get the innovation from a growing open source community.
C
Let's look at the serverless operational benefits. Without serverless containers, you eventually have to deal with one of these two problems: over-provisioning, where you have too many containers running and IT has to eat the cost of those idle resources, or under-provisioning, where you have more requests than provisioned containers, which essentially leads to a poor quality of service, or even lost business revenue when you miss those critical transactions.
With serverless containers, though, the number of containers tries to match the demand, as you can see in the picture here on the right. That saves time and cost for your IT department, creating a more direct line between IT costs and business revenue. This elasticity of the system also helps you increase the density of your clusters, allowing customers to run more applications with the infrastructure they already have.
C
The user experience of OpenShift Serverless is at the heart and center of OpenShift. In the console, you can visualize your serverless applications and the services that contribute those containers, and set the traffic distribution across multiple versions of an application. The installation is done through an operator that enables a great day-1 experience and an even better day 2, delivering over-the-air updates with bug fixes and CVEs that can be applied automatically.
C
You can also leverage kn, the official CLI of Knative, to create applications — and even event sources for those applications — with a single command. You get a service deployed and a URL to access it within a couple of seconds. The times seen here illustrate the creation of a service, a route, and a deployment, plus the download of the container image into the cluster, which is pretty fast.
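The single-command flow just described can be sketched with the `kn` CLI. Since it needs a live cluster, the block below only assembles and prints the command; the service name and image are placeholders.

```shell
# Sketch of the one-command deploy described above (needs a cluster to run).
KN_CMD='kn service create greeter --image quay.io/example/greeter:latest'
echo "$KN_CMD"
# On a cluster, kn prints the ready service's URL once the first revision,
# its route, and the underlying deployment have been created.
```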
B
Thank you, William. The next piece I want to talk about is OpenShift Builds, the next service in platform services. It's an API that allows you to build lean images from application source code and binaries, on Kubernetes itself, so you can create really slim images. Let's say you have a Java project — a Java Maven project — and you want to build this Java application from source, but you don't want all the build tools to be included in the resulting image.
B
Your runtime image — the ultimate image — should be as slim as possible, including only minimal dependencies like the Java runtime environment and your application binary. Through this API, we can trim down and cut out all those dependencies, keeping them limited to build time, and produce images that are really lean: they only include dependencies that are required at runtime. It also opens the space for using any of the Kubernetes build tools that you're familiar with and that are quite popular in the Kubernetes community.
B
You might be using Buildah for doing Dockerfile builds, or Source-to-Image if you want to automatically turn source code into a binary image, or Cloud Native Buildpacks, Kaniko, Jib, and other tools. So it provides a quite accessible API, supporting build strategies that are available in the Kubernetes community, and at the same time it has a very pluggable architecture, so you can extend it with more custom strategies that you might be using within your organization because you have special needs — as a lot of our customers do.
B
It is based on CRDs, so you install the operator on OpenShift or any Kubernetes, and you can consume the same API for the same build regardless of what kind of build tool you're using. With OpenShift Builds, we have just recently started this project: it's going to be Developer Preview in OpenShift 4.4, and we are iterating on it in the community to take it to GA, hopefully within this year.
B
How does the OpenShift Builds API work? This is a simplification of the internals of OpenShift Builds. There are a number of CRs — custom resources — that represent builds: there is Build and BuildStrategy, and within the Build you specify which build strategy should be used. Do you want to use Source-to-Image, Cloud Native Buildpacks, Kaniko, or something else? The strategy defines the details of how the actual build should be done, but you just choose a strategy.
B
You don't have to deal with all the details of how, for example, tags or source images are used. You provide your Git repo or application binary to the Build API, and you also specify what kind of base image you want to use and what kind of builder image the build should run with. The build strategy obviously comes with good default values for these.
So when you choose Source-to-Image for Java, for example, it knows which image to use that contains all the build tools for your Java application, and which base image is appropriate for your resulting image: you'd take a JDK image that contains Maven as the build-tool image, and it would pick a very slim JRE image as your base image.
Then it builds the application and layers it — in this example you probably have a JAR file — over that slim base image, producing the application image.
B
You can quite simply change strategies: if you want to move to a different strategy within the same API, you don't really have to change much — just one attribute changes. So OpenShift Builds really abstracts away how you use these build strategies and build tools, runs them on Kubernetes, and, over time, provides a set of tooling around it through the CLI and the console.
B
You can interact with these builds through that tooling. And, like I said, it's quite early days: we have just launched this project in the open, and throughout this year we're going to deliver much more experience and tooling around it, to help you build images on the Kubernetes platform regardless of what platform you're on. To show you an example of what the API looks like: in this file, you can see two Build objects.
B
On the left is a build that uses Cloud Native Buildpacks; on the right-hand side, you can see the same application being built by Source-to-Image. OpenShift Builds will consume this Build definition once it finishes and produce the image for you, using the appropriate build strategy chosen behind the scenes. OpenShift Builds is powered by Tekton: we are using Tekton as the engine that runs these builds, using those build strategies that are defined.
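As a rough sketch of what a pair of Build objects like the ones on the slide could look like: this project was only at Developer Preview when this talk was given, so the API group, strategy names, and field layout below are approximations, not a stable reference.

```shell
# Illustrative only: the same app built two ways; only the strategy differs.
# API group and field names are approximations of the Developer Preview API.
cat > /tmp/builds.yaml <<'EOF'
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: greeter-buildpacks
spec:
  source:
    url: https://github.com/example/greeter   # placeholder repo
  strategy:
    name: buildpacks-v3
  output:
    image: quay.io/example/greeter:latest
---
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: greeter-s2i
spec:
  source:
    url: https://github.com/example/greeter
  strategy:
    name: source-to-image       # swap one attribute to change strategies
  output:
    image: quay.io/example/greeter:latest
EOF
grep -c 'kind: Build' /tmp/builds.yaml
```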
The next thing I want to talk about is OpenShift Pipelines.
B
OpenShift Pipelines is a cloud-native CI/CD framework, based on Tekton, that brings Tekton pipelines to OpenShift. In other words, it provides a Kubernetes-native, declarative API — a series of standard custom resources to define your pipeline — and, more importantly, it runs natively on Kubernetes: every pipeline run executes in isolated containers that are scheduled on demand when your pipeline executes. The advantage of that is, first of all, that you don't have any central CI/CD server to manage; it is a capability of the platform.
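The declarative model described above can be sketched with Tekton's standard custom resources; the task names below are placeholders rather than tasks that ship with OpenShift Pipelines.

```shell
# Sketch of a declarative Tekton pipeline: two ordered tasks, each of which
# runs in its own on-demand pod — no central CI/CD server involved.
cat > /tmp/pipeline.yaml <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
  - name: build
    taskRef:
      name: build-image        # hypothetical Task
  - name: deploy
    runAfter:
    - build                    # ordering between otherwise isolated pods
    taskRef:
      name: deploy-app         # hypothetical Task
EOF
grep -c 'taskRef' /tmp/pipeline.yaml
```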
B
You only have your pipelines, and when you want them to run, they run in containers. You don't have that central thing to manage and govern, or a central team to nurture and upgrade and take care of it. And the second thing is that, since your pipelines are completely isolated from each other, developer teams have full control over their pipelines: what kind of plugins they want in them, and what kind of activities should happen in them.
B
If you want to update the JDK version that is used within your pipeline, it does not affect anyone else or anyone else's pipeline, and the same applies if you want to use a certain plugin. These are some of the issues that a lot of our customers have run into when using more centralized, traditional CI/CD servers.
B
You can see a couple of screenshots on the slide of how Tekton is exposed through the various tooling that exists in OpenShift: the diagram of the pipeline; its connection to the projects or applications you have deployed, in the topology view; and the logs of the pipelines, right within the console or within your IDE, which can give you visualization and code completion to help you author the pipeline.
B
The next set of services I want to talk about are application services. Application services are OpenShift services that help developers create cloud-native applications on the platform, taking advantage of these building blocks instead of creating everything from scratch. There is a large collection of programming languages, databases, and different types of middleware, plus a lot of services from Red Hat partners — more than 150 services that are available as operators.
So you can consume these services when you're building applications, and since a lot of them are operator-based, they behave like managed services — you don't even have to manage the operation of the services yourself. Let's take a look at what services are available.
So let's talk about the languages and runtimes that are available on the platform. A collection of programming languages, runtimes, and databases comes built into OpenShift. These are all supported images that officially ship as part of OpenShift, covering programming languages like Java, Node.js, Python, and so on.
B
You can immediately start building images of your application with the build technologies that I mentioned earlier, and deploy them on the platform. There are also a series of databases delivered with the platform that you can use within your development and testing environments, deploying and using them in the applications you're building. At the same time, there is a series of runtimes — like the Apache HTTP Server, which is quite popular for serving static content or PHP applications.
B
If you have traditional monolithic applications — say, Java Enterprise — that you want to move to containers, or you are integrating your microservices with backend services that are more traditional or legacy applications across the organization, or you are automating business processes or business rules, Red Hat middleware provides a very rich collection of middleware based on JBoss technologies, as well as Open Liberty and all the Cloud Paks that come from IBM, which you can use on the platform to enrich the applications that you are building.
B
Beyond that, we obviously have partners that are building applications and services for OpenShift, and the majority of them are exposed within OperatorHub. When you go through OperatorHub in OpenShift, you see a wide categorization of the different types of services that you can deploy — enabling DevOps, data services, databases, security, CI/CD, and so on. And a lot of them, like I said, run as operators, so they behave like managed services.
B
You don't have to operationally take care of these application services; you deploy them on OpenShift and use them within your applications. One of the other additions to the platform is the Service Binding Operator, which allows you to connect partner applications — applications backed by an operator — to your own applications. The credentials for whatever that operator is provisioning for you — be it a database, a message queue, or something else — can be automatically injected into your application and consumed there as environment variables, secrets, or in other ways.
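As a sketch of declaring such a binding: the Service Binding Operator's API group and schema changed several times around this period, so the resource below is illustrative only, with placeholder names for the workload and the operator-backed service.

```shell
# Illustrative ServiceBinding sketch (group/version and field names vary by
# operator release; all names here are placeholders).
cat > /tmp/binding.yaml <<'EOF'
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: app-to-db
spec:
  application:            # the workload to inject credentials into
    group: apps
    version: v1
    resource: deployments
    name: my-app
  services:               # the operator-backed service being bound
  - group: postgresql.example.com
    version: v1alpha1
    kind: Database
    name: my-db
EOF
grep -c 'kind: ServiceBinding' /tmp/binding.yaml
```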
B
It is a similar model to how the Open Service Broker functioned before, with the Service Catalog, for people who are familiar with that — making the same capability available for operator-backed services as well. The powerful thing about it is that it works based on labels. If you redeploy your application frequently — which is what you would do, especially in a development environment, or if you have a high velocity of deployment in production — the labels will be matched again, and we inject those credentials into the new deployment.
B
So you wouldn't need to do anything to get access to those operator-backed services; they will be available in the new container image, or the new container deployment, as well. Under the hood, this is powered by Kubernetes CRDs, and it's easily integrated into your tools, like the rest of Kubernetes.
B
Essentially, these are Kubernetes objects, and it works across any Kubernetes object, so you can bind to a pod or even a deployment, and we're even working to make those credentials available simply as a secret, so the platform doesn't have to dictate how you want to consume the credentials from these operator-backed services. You can consume them in any way you want — for example, in your CI/CD flow.
B
The Service Binding Operator is available today in OperatorHub on OpenShift, so you can go through the admin console's OperatorHub, install it, and start using it. The last category of services I'll talk about are developer services. Developer services are all focused on a developer's day-to-day life: a set of tools that help developers be productive on the platform — how to build, package, and deploy their applications, and how to onboard them. That starts with the developer console.
B
The developer perspective within the OpenShift console focuses on an application view rather than on Kubernetes constructs. There is Helm, for packaging and installing applications. There is also a developer-focused CLI for fast development iterations on your workstation: you want to modify code and deploy it quickly within the container, and you want to skip building the image every time — which is usually a headache when you have to do it every minute or every couple of minutes, hundreds of times a day.
B
The developer CLI reduces that pain by making it really iterative: when you make a change to the code, it just syncs it inside the container and updates the application, so you can test the result of your change immediately.
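The inner loop just described matches the odo CLI. Since the commands need a cluster, the block below only writes the typical session to a file; the component type and name are placeholders.

```shell
# Typical odo inner-loop session (illustrative; needs a cluster to execute),
# written to a file here rather than run.
cat > /tmp/odo-session.txt <<'EOF'
odo create nodejs backend   # create a component from the current source dir
odo push                    # build once and deploy it to the cluster
odo watch                   # then sync code changes into the running container
EOF
grep -c '^odo ' /tmp/odo-session.txt
```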
There are also IDE plugins and Visual Studio Code extensions that we created around the different technologies in OpenShift, so that developers can interact with the platform right where they are coding, without having to leave their coding environment. And CodeReady Containers gives developers a local instance of OpenShift, so they can run it on their own laptop.
B
They don't have to be dependent on connectivity if they are in an offline environment — or maybe they are online but want to control a single instance on their own laptop. And there is CodeReady Workspaces, which gives you a more collaborative, Kubernetes-native, web-based IDE. So there is a rich set of developer tools and services that come with OpenShift to make life simpler for developers on OpenShift and Kubernetes, letting them focus more on the code rather than on the Kubernetes aspects.
B
And why do we work on this particular set of tools? We look at the development process from an end-to-end perspective: the whole flow of writing code and debugging in your local environment; building, packaging, and running the application; and, when the developer is comfortable with it, committing it to the Git repository, running it through CI/CD, and deploying it to the production environment.
B
Across the series of tools that we provide and work on, we try to make sure that the complexities of interacting with OpenShift and Kubernetes are addressed through each of these phases that developers and code go through, to make life easier for them in their interactions with the platform. I mentioned the developer console; to give you a little more of an in-depth view, within the console there are two perspectives. There is the admin perspective, which focuses on Kubernetes administrative concepts, and there is the developer perspective, which focuses on end-to-end flows around applications.
B
We work really hard, and iteratively, on this piece of the product as well, to address the needs of developers on Kubernetes — to make life really easy for them and not force them into Kubernetes constructs, or directly into the Kubernetes way of thinking, if they don't want that. Of course, for developers that do want to be in a Kubernetes environment, there is the admin console, which lets them interact with the platform more from a Kubernetes perspective, with Kubernetes objects.
B
Helm 3 is another addition to the platform that is fully supported on OpenShift from OpenShift 4.4. It allows you to package, install, and update your applications on OpenShift. Why would you use it? A lot of customers and a lot of teams already build Helm charts for deploying their applications, and this is full Helm 3. One of the advantages it has is that the Tiller component is gone, so you're on a fully client-side model.
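A minimal client-side sketch of the Helm 3 flow described here: laying out a chart locally, then installing it as a release. The chart and release names are placeholders, and only the scaffolding runs below, since `helm install` needs a cluster.

```shell
# Scaffold a minimal chart on the client side — with Helm 3 there is no
# Tiller, so the only server-side footprint is the release's own objects.
mkdir -p /tmp/demo-chart/templates
cat > /tmp/demo-chart/Chart.yaml <<'EOF'
apiVersion: v2
name: demo-chart
version: 0.1.0
EOF
cat > /tmp/demo-chart/values.yaml <<'EOF'
replicaCount: 1
EOF
# On a cluster, you would lay your own values over the chart and deploy it:
#   helm install my-release /tmp/demo-chart --set replicaCount=2
ls /tmp/demo-chart
```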
B
There is a Helm CLI that comes with the platform: you can take any Helm chart, provide the values and configuration customizations that you want to lay over the chart, and deploy it on the platform through its releases and the deployment of your application. The good thing about Helm charts in Helm 3 is that it follows Kubernetes RBAC, so you have the same controls and security that you have for the rest of your applications.
B
You can apply that to Helm and the charts that get deployed within your namespaces. We also bring Helm to the surface within the developer console, so you can see Helm charts within the developer catalog. The developer catalog in the developer console is where you find content to deploy different types of services — the application services that we have been talking about — and once you deploy a chart, you will see its releases appear as part of the Helm navigation item.
B
You also see them exposed in the topology view, so you can interact with Helm charts directly within the console, in addition to the Helm CLI and the other tooling that is available within the Helm ecosystem. CodeReady Workspaces, which I briefly mentioned, is a web-based developer workspace: it runs on Kubernetes, on OpenShift, and gives you a complete stack of what you need to start developing your application.
B
You can create canned workspaces that provide a very familiar experience, like VS Code, and once you click and create that workspace, it gives you all the tools that you have predefined in a Git repo for what your application needs: maybe a certain version of Maven to build your application, or Java support, or maybe Tomcat or EAP or some other application server on the side.
B
So you define everything that you need within that stack, or workspace, and put that in Git, and any developer that gets onboarded — or maybe you work from different workstations, or from home, or from another part of the organization, or your team is distributed — they all have access to the exact same, identical stack. They can create it within the browser and start developing. It is based on Eclipse Che, and it gives you an additional space that complements your local workstation — your own laptop or desktop environment that you develop on.
B
What I would like to finish with is a quick overview of the capabilities that we have planned to add to the platform — across platform services, application services, and developer services — throughout this year and toward the second half. There are many capabilities, and I won't have time to go through all of them, but we are working hard across all the teams to deliver the capabilities that our customers are asking for, to help them manage these workloads.