Description
Red Hat's Technical Product Managers review what to expect in Red Hat OpenShift 4.12.
1:42 Presentation starts
2:14 Overview of 4.12 major changes
6:22 Spotlight features
21:45 Console
22:44 Developer experience
A
For OpenShift 4.12, thanks for joining from wherever you are in the world and whatever time zone you might be in. You are here with me and my peers and teammates on the OpenShift product management team, and we are very pleased to bring you a what's new in OpenShift 4.12. Without further ado, let me go to the next slide.

A
With OpenShift 4.12, we have packed it with the features and capabilities that our customers and partners have asked for. We have strengthened our core and our security, and we have some exciting things to talk about at the edge, while also ensuring that there are operational and scale capabilities across all these pillars. We will be talking about each one of these in detail in the Spotlight section, which is coming up very soon, so stick around.
A
That said, Kubernetes 1.25 is what OpenShift 4.12 is based on. The Kubernetes upstream community obviously continues to add a number of capabilities for our customers; kudos to all the contributors from Red Hat and elsewhere for making this successful. Things to call out: you will see many security-related topics continuing to receive attention. There are user namespaces, there are checkpoints for forensic analysis, and there is Pod Security Admission, which is the new mechanism, while the old one, Pod Security Policies, which had been deprecated for a while, is actually being removed in this release, and so on and so forth. So a lot of exciting things; there is a blog post around this, and you will see some of this mentioned in our OpenShift 4.12 materials. Very exciting.
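To make the Pod Security Admission change concrete: unlike the removed PodSecurityPolicy objects, the new mechanism is driven entirely by namespace labels. A minimal sketch (the namespace name is illustrative, not from the video):

```yaml
# Hypothetical namespace showing the standard Pod Security Admission
# labels that replace the removed PodSecurityPolicy mechanism.
apiVersion: v1
kind: Namespace
metadata:
  name: example-app                  # example name only
  labels:
    # Reject pods that violate the "restricted" profile...
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    # ...and also warn and audit against the same profile.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

The enforce/warn/audit modes can point at different profiles, which is useful for trialing a stricter profile before enforcing it.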
C
Also with OpenShift 4.12 we're introducing some life cycle changes, specifically for even-numbered releases. Starting with OpenShift 4.12, we're adding a six-month Extended Update Support phase, taking the total life cycle for the even-numbered releases to 24 months.
C
The reason for doing this: we have customers and partners, particularly in some of our use cases as we move towards the edge, who are struggling with the cadence of Kubernetes and OpenShift updates, particularly for devices or servers that they wish to put into the field for long periods of time in between upgrades. So the approach here, with the EUS phase on the even-numbered releases, is to allow them a longer time in the field between upgrades.
C
With the way the SKUs work, with the Premium offering attachment and also with the Standard subscriptions plus the add-on, we are aligning this with the way Extended Update Support works with Red Hat Enterprise Linux as well. Important to note: the EUS-to-EUS upgrade facility that we added for 4.8 to 4.10 does continue with the same behavior.
C
That means that, as we upgrade from 4.12 to a future EUS release, you will still have to go through the interim release in between to complete the upgrade, and also that layered operators and operands will continue to have their own life cycles as well, which will be advertised on the OpenShift life cycle page as they are today.
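As a sketch of what following an EUS-to-EUS path looks like on the command line (version numbers are illustrative):

```shell
# Move the cluster onto the EUS channel for the even-numbered release
oc adm upgrade channel eus-4.12

# Inspect the recommended updates before stepping through the
# interim release on the way to the next EUS version
oc adm upgrade
```

The interim-release hop described above still applies; the channel switch only changes which updates the cluster is offered.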
C
Think something like two cores and two gigs of RAM as a bare minimum. Here we combine Red Hat Enterprise Linux for Edge, the footprint that we have already got in the field for edge deployments on the RHEL side, with MicroShift, a just-enough Kubernetes distribution based on the same Kubernetes builds as OpenShift, optimized around having a lower footprint. Again, this is to address market demand as we get further and further out to the edge, with these smaller devices and some of the constraints that come with them, to deploy Kubernetes and consistent APIs to them.

C
So if we look at the Red Hat Device Edge technical overview slide: at the lower level we have all of the things provided by Red Hat Enterprise Linux in that for-edge deployment footprint, and on top of that we are adding a MicroShift binary, including the basic Kubernetes cluster services and Kubernetes orchestration.

C
Of course, all in a way that is consistent with OpenShift, while again optimizing for that bare minimum that people need to deploy an application in a constrained computing environment at the edge; the target, again, is two cores and two gigs of RAM. This will be introduced as a Dev Preview in 4.12, with a view to moving to support later in the year. Next up, I believe Marcos is going to talk about some other exciting things we're doing for edge computing, more in the cloud space.
D
Sorry, I was just pulling up my audio. Yes, so in 4.12 we're making it easier for you to deliver low-latency applications closer to your end users and on-premises installations. What we're doing is extending OpenShift to the edge by adding support for AWS Outposts and AWS Local Zones for customer-managed OpenShift on AWS. With Outposts, customers using self-managed OpenShift on AWS can now provision OpenShift clusters to take advantage of Outposts.
E
We are very excited to announce in OpenShift 4.12 the agent-based installer, which we designed for disconnected OpenShift deployments; that is one of the specialties of this new installer, but it works for any kind of OpenShift installation. Think about the installers that we have today, each of which exists for a reason, and here you have a screenshot of what you find when you try to install OpenShift.
E
In this case, as an example on bare metal, we have interactive, web-based workflows; we have automated workflows, which are very opinionated; and we have full-control workflows, the UPI, which for some can be a bit complex. So with the agent-based installer, what we are trying to do and to solve is to provide an easy way to deploy OpenShift on premises, with all the flexibility that you have with the Assisted Installer, for example, which is a pretty successful web-based all-in-one installer.
E
It is part of and integrated in the openshift-install binary that many of you already know. With the agent-based installer you're going to be able to install OpenShift from a bootable image that doesn't require anything else: you have the image, you boot that image on the target hosts, and by the end of the installation you will have your OpenShift cluster. In this release we support, obviously, disconnected environments; this is key.
E
That is one of the focuses that we have. We support bare metal, vSphere and also platform agnostic, also known as platform "none". We support all the topologies that we support with it: that is, single-node OpenShift, compact clusters (that is, three-node clusters) and clusters with additional workers. And it's CLI-based in this first iteration.
E
So that means it can be automated by third-party orchestration tools, if you so want, and it's based on technologies that you're already using, like the assisted service, which is the engine behind the Assisted Installer. We hope you can try it in OpenShift 4.12, and with this I'll pass it on to Adel.
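The bootable-image flow described above can be sketched in two commands, assuming an `install-config.yaml` and `agent-config.yaml` are already prepared in the working directory (an abbreviated sketch, not the full procedure):

```shell
# Generate a bootable agent ISO from install-config.yaml and
# agent-config.yaml in the current working directory
openshift-install agent create image

# Boot the resulting ISO on the target hosts, then wait for the
# installation to finish from the same working directory
openshift-install agent wait-for install-complete
```

Because everything is baked into the image, no external bootstrap or provisioning service is needed, which is what makes it suitable for disconnected environments.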
F
Thank you, Ramon. For hosted control planes, we're going to continue to tech preview hosted control planes on AWS. Additionally, we're going to introduce a preview for bare metal using the Assisted Installer flow, which we also call the agent flow, and then we're continuing to dev preview Azure and KubeVirt, especially for use cases like OpenShift on OpenShift. The way you can get hosted control planes is by going to the OperatorHub and installing the multicluster engine for Kubernetes operator.
F
Once you do that, you'll need to enable the add-on for the preview, because HyperShift, or hosted control planes, is still in preview. Once you enable the add-on, the HyperShift operator is going to be installed in your cluster and you're ready to create hosted clusters using hosted control planes. Optionally, you can also use Red Hat Advanced Cluster Management, which by default relies on the multicluster engine for Kubernetes operator.
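A sketch of what enabling the preview component on the multicluster engine looks like; the API group and the `hypershift-preview` component name are taken from the multicluster engine documentation of that era and should be verified against your operator version:

```yaml
# Enable the HyperShift preview component on the multicluster
# engine custom resource (field names may differ by MCE version)
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
      - name: hypershift-preview
        enabled: true
```

Once the component is enabled, the HyperShift operator is deployed into the cluster as described above.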
F
So if you look at the big picture, as Ramon mentioned, we have introduced multiple installation flows, and we are especially trying to be use-case driven. Depending on your use case, your environment, and how flexible you want your deployment to be, you're going to have the option to choose any of the four options that we provide at day zero.
F
You have the option that uses the Assisted Installer with the standard configuration; you have the automated flow that you're used to, using IPI; there is the local agent-based installer that Ramon talked about, which is really useful and makes a lot of sense for air-gapped and disconnected environments; and finally, if you want full control over your install flow, you can do that with normal UPI installs.
F
Once you've done that, you can turn this day-zero cluster into a hub cluster to manage a fleet of clusters. You can do that in one of two ways: either you already have Advanced Cluster Management, which allows you to enforce policies at scale, do fleet observability and more, or you can go ahead and install the multicluster engine for Kubernetes operator, which, as mentioned, brings along the hosted control planes feature and the HyperShift operator. In doing so, you're turning your cluster into a hub cluster to manage a fleet from, and this cluster is also going to act as a HyperShift management cluster.
F
Now, once you have a hub cluster, you can start deploying OpenShift clusters as spoke clusters. These spoke clusters can either be standalone, using the Assisted Installer, UPI or IPI, or they can use hosted control planes, which basically means using HyperShift as a backend for deploying the clusters. So you're going to be able to manage the fleet using either standalone clusters or HyperShift, just by turning on an operator on a hub cluster. I'm going to hand it over to Ali to talk more about the console and dynamic plugins.
H
Hey folks, we're very excited to announce that in 4.12 dynamic plugins become GA. The number one console enhancement request is the ability to customize the console in one fashion or another, so with dynamic plugins, customers and partners now have the ability to customize the OCP console in a supported manner. The SDK allows for the creation of new pages, navigation items, tabs, resources and more; the sky is truly the limit with dynamic plugins.
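Once a plugin is deployed to the cluster, it is enabled through the console operator configuration. A sketch, with a hypothetical plugin name; note this merge patch replaces the whole `plugins` list, so in practice you would include any already-enabled plugins:

```shell
# Enable a deployed dynamic plugin (name is hypothetical) on the
# console operator config; a merge patch replaces the plugins list
oc patch consoles.operator.openshift.io cluster --type=merge \
  -p '{"spec": {"plugins": ["my-example-plugin"]}}'
```

The console then loads the plugin's assets at runtime, without rebuilding or redeploying the console itself.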
I
One key component of all those things that make up that networking ecosystem is the Kubernetes CNI plugin. Starting with OpenShift 4.12, the default out-of-the-box Kubernetes CNI plugin for OpenShift is OVN-Kubernetes, and this is true for all new installations across all platforms and topologies, with one exception: IBM Cloud, where it is currently still an option, not the default. Supported since 4.6, and with thousands of production deployments already out there in the field, the OVN-Kubernetes plugin had already become the default for some of our topologies and platforms, as listed here on the slide.
I
OVN-Kubernetes has full feature parity with the previous default CNI plugin, OpenShift SDN, but it adds a wider array of features like IPv6, IPsec, hybrid Windows/Linux networking and hardware offload capabilities. So, while the use of OVN-Kubernetes is not required, customers that want to switch to OVN-Kubernetes, but don't want to greenfield a brand-new cluster to do it, can use a fully supported procedure in our product documentation to migrate their deployment from the previous plugin to the newer OVN-Kubernetes plugin.
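The documented migration is a multi-step procedure; the key patches look roughly like the following abbreviated sketch (follow the product documentation for the complete steps, which include waiting for the operator to settle and rebooting nodes):

```shell
# Tell the Cluster Network Operator to prepare a migration to
# OVN-Kubernetes (abbreviated; see the docs for all steps)
oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{"spec": {"migration": {"networkType": "OVNKubernetes"}}}'

# Later in the procedure, switch the cluster network type itself
oc patch Network.config.openshift.io cluster --type=merge \
  --patch '{"spec": {"networkType": "OVNKubernetes"}}'
```

The same documentation also covers rolling back to OpenShift SDN if needed.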
I
So what about clusters that are still using the pre-4.12 default CNI plugin? First, it's important to understand that the previous default plugin is not going away anytime soon. Existing deployments using the older plugin will continue to be fully supported, but no new features will be added to that plugin. The new default plugin only applies to version 4.12 and newer clusters, so earlier versions of OpenShift will continue to default to the previous CNI plugin up until OpenShift 4.12.
I
In addition to a modernized CNI plugin, our customers asked us to provide more network observability in the product for a variety of use cases. I'm happy to say that, with the release of 4.12, OpenShift Network Observability is a fully supported, optional add-on operator for all currently supported versions of OpenShift, starting with 4.10 and onward. Network Observability is integrated with the OpenShift console and installs additional tooling in its networking sub-tab. The operator uses XDP/eBPF-based agents on the cluster nodes to collect networking metrics, and it provides multiple representations of that data.
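After installing the operator, collection is configured through a FlowCollector resource. A minimal sketch; the API version reflects the operator's early releases, and field names should be checked against the version you install:

```yaml
# Minimal FlowCollector enabling the eBPF-based agent on each node
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: EBPF          # use the eBPF agent to collect flows
    ebpf:
      sampling: 50      # sample roughly 1 of every 50 packets
```

Lowering the sampling ratio increases fidelity at the cost of more collection overhead.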
I
For example, the dashboard, tabular and topology views that are shown here. These are especially important to network-minded developers and administrators, to reduce complexity and to help them understand, debug and optimize their network traffic. By focusing on observable traffic metrics like flows, topology and tracing, you can really simplify identification of network bottlenecks, assist with troubleshooting connectivity issues, and also help optimize network performance in OpenShift clusters.
J
We are happy to announce that the Red Hat Advanced Cluster Security cloud service is in field trial, and we are looking forward to hearing customer feedback. With ACS as a service, you install minimal software on your Kubernetes cluster and start securing it in minutes. We support OpenShift on private and public clouds, but we also support other Kubernetes flavors provided by the major hyperscalers. Forget about all the operational overhead, and let Red Hat worry about that.
J
Instead,
you
will
save
time
on
provisioning,
rescaling,
doing
security
patches
over
updates
upgrades
and
backup
and
Recovery.
The
service
is
financially
backed
by
red
hat
and
you
will
receive
24x7
support,
offer
buyers,
finally,
enjoying
flexible
consumption
models
that
includes
pay-as-you-go
and
also
you
can
use.
Your
committee
spend
to
purchase
ECS
on
red
hat
AWS
and
natural
Marketplace,
and
with
this
with
close,
the
spotlight
section
and
we
move
over
with
a
lead
to
the
console
foreign.
H
Ali here again. As mentioned previously with dynamic plugins, we've had a number of enhancement requests around customizing the console. So another area of improvement is that we are providing a form-based method to customize the console further. This new feature gives cluster admins the ability to configure the visibility of the admin and dev perspectives, quick starts and the developer catalog, and the ability to set the default cluster roles for the dev perspective. To do this, admins can go to the cluster settings in the Administration navigation area.
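Behind the form, these settings land on the console operator configuration. A sketch of what hiding the developer perspective could look like; the `customization.perspectives` schema is an assumption based on the 4.12-era console operator API and should be verified against your cluster:

```yaml
# Console operator config excerpt: control perspective visibility
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
      - id: dev                  # hide the Developer perspective
        visibility:
          state: Disabled
      - id: admin                # keep the Administrator perspective
        visibility:
          state: Enabled
```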
H
I won't go through each of these RFEs here, but I'll leave this list for reference; these are all the RFEs that made it into 4.12 for the OCP console alone. Next slide, please. All right, so for the developer experience we have so much content that we created a separate session for it. Highlights are around enhancements to the dev console, Podman Desktop, Dev Spaces, and odo becoming GA. So for the deep dive into the developer features being delivered in and alongside OpenShift 4.12:
H
Please check out the linked video and slides at the bottom of this slide. And now I'd like to pass it off to James Falkner. Next slide, please.
K
All right, thank you, Ali. OpenShift 4.12 supports a wide variety of runtimes and frameworks for developers to use, so I want to highlight some of the featured ones that we have in the product today and some of the recent updates. The first one I'll mention is Quarkus. If you haven't heard of Quarkus, it's a Kube-native Java framework: it starts up super fast, takes very little memory, and it's really complementary to the way that Kubernetes and OpenShift deploy and manage applications.
K
In OpenShift 4.12 we now have full support for Java 17, for both traditional JVM apps as well as native executables using GraalVM. We've also added a new developer UI for browsing and monitoring your Kafka deployment. So if you're using Kafka, whether on-prem, self-managed, or through the Red Hat OpenShift Streams for Apache Kafka capabilities, you'll have a new Dev UI where you can look at topics and look at messages coming through. It makes it very easy to develop with Kafka.
K
We also have Dev Services; these are things that automatically get created for you. So if you're building an application that uses Elasticsearch, we will fire up Elasticsearch for you in developer mode; you no longer have to set anything up, and it wires it all into your application for you. We also have new capabilities within the Infinispan Dev Services, so again that will fire up an in-memory data grid for you, and there are some API improvements there.
K
There,
we've
also
added
in
in
the
latest
version
of
openshift
support
for
Direct
openid
Connect
providers,
things
like
apple
Facebook,
GitHub,
et
cetera,
Etc.
If
you're
building
an
application,
you
want
to
protect
your
apis
with
these
third-party
providers.
It's
super
simple
to
do
this
now
and
then.
Lastly,
new
support
for
service
binding
we've
had
support
in
there.
We've
added
support
for
workload,
projection
for
reactive
SQL
clients.
K
So
if
you're
building
a
reactive,
you
know
event
driven
non-blocking
application
with
quarkus
and
you're,
using
more
ADV
or
MySQL
or
any
of
the
reactive
versions
of
those
it
will
automatically
bind
those
Services.
Your
applications
to
those
services
is
through
kubernetes
service
finding.
So
if
you
move
to
the
next
slide,
two
more
runtimes
I
want
to
highlight
here
we
have
support
for
Apache
Tomcat.
This
is
red
Hat's
productized
version
of
Apache
Tomcat.
K
The
newest
version
in
openshift
4.12,
includes
updates
to
the
base
Tomcat
itself,
as
well
as
Apache
HTTP
server,
full
support
for
round
nine,
and
then
minor
updates
to
some
of
the
capabilities
that
have
always
been
in
that
product.
So
if
you're,
building
applications,
web
applications
or
servlet
applications
with
with
Tomcat
check
out
JBoss
web
server,
it's
built
into
openshift
and
you
have
full
support
for
that.
It
also
has
an
operator
as
well.
If
you
want
to
deploy
your
applications
and
manage
them
through
a
kubernetes
operator.
K
Next
Slide,
the
last
runtime
I
wanted
to
focus
on
now
is
is
Eclipse
adopting.
So
this
comes
out
of
the
Eclipse
adoption
project.
They
have
a
new
distribution
of
open
jdk.
So
if
you're
building
Java
apps
on
on
openshift,
you
have
a
lot
of
flexibility
here,
it's
one
of
the
most
popular,
if
not
the
most
popular
runtime,
for
Java
runtime
based
Java
runtime,
with
400
million
downloads
and
200
000
downloads
a
day
fully
supported
on
openshift
for
both
all
three
of
the
the
LTS
releases
of
java,
so
8,
11
and
17..
K
We
also
have
support
for
things
like
Mac
OS,
if
you're,
if
you're
doing
development
on
Mac
OS,
and
we
have
published
official
container
images
for
that
for
use
on
openshift
as
well
and
then.
Lastly,
a
GitHub
action
support
So
if
you're
building,
GitHub
actions
and
you're
building
Java
applications,
you'll
have
a
new
keyword
in
there
called
Tamarin,
which
is
the
name
of
the
Java
distribution
from
Eclipse
adoptium.
So
it
makes
it
super
simple
to
build.
You
know:
CI
pipelines
with
GitHub
actions
and
using
this
most
popular
and
fully
supported
runtime
on
openshift.
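In a GitHub Actions workflow, the Temurin keyword mentioned above is used with the standard `actions/setup-java` step; the Maven command is just an example:

```yaml
# Excerpt from a GitHub Actions workflow: build with Eclipse Temurin
steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-java@v3
    with:
      distribution: 'temurin'    # the Adoptium distribution keyword
      java-version: '17'
  - run: ./mvnw -B package       # assumes a Maven wrapper in the repo
```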
K
That's it for runtimes developer updates. I'll pass it over to Kustav for platform services.
L
Next slide, please. Hi everyone. As you know, OpenShift Pipelines is based on Tekton, which is a powerful yet flexible, Kubernetes-native, open source framework for creating continuous integration and delivery systems.
L
Since last time, we have a small update on Tekton, which is that Tekton graduated as a CD Foundation project in the last quarter. Coming to OpenShift 4.12, we are releasing Pipelines version 1.9, and in Pipelines 1.9 we have added support for Tekton resolvers, which run in a Kubernetes cluster alongside Tekton Pipelines and resolve requests for tasks and pipelines from remote locations. As resolvers we support the built-in cluster, bundle and hub resolvers, among others. Tekton resolvers will be in tech preview.
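A run can reference a remote task through a resolver instead of requiring it to be pre-installed. A sketch using the hub resolver, with parameter names as in upstream Tekton's remote resolution (the repository URL is illustrative):

```yaml
# TaskRun that resolves the git-clone task from a hub at run time
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-clone
spec:
  taskRef:
    resolver: hub          # fetch the task definition remotely
    params:
      - name: kind
        value: task
      - name: name
        value: git-clone
      - name: version
        value: "0.8"
  params:
    - name: url
      value: https://github.com/example/repo.git   # illustrative
  workspaces:
    - name: output
      emptyDir: {}
```

The cluster and bundle resolvers work the same way, pointing at in-cluster definitions or OCI bundles respectively.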
L
It
will
be
very
useful
for
our
customers,
many
of
whom
are
running
remote
Pipelines.
We
are
also
making
pipeline
as
code,
which
is
an
opinionated
CI
Solution
on
top
of
openshift
pipelines
as
generally
available
pipeline
as
print
as
code
has
been
in
Tech
preview.
For
quite
some
time.
We
have
continuously
gathered
feedback
from
our
customers,
who
are
actively
trying
out
path
in
their
environment
and
made
various
feature
announcements
some
of
the
enhancements
that
we
are
releasing
in
fact
alongside
making
it
as
generally
available
are.
L
We
are
adding
support
for
concurrency
limboot
in
the
repository
CRT.
Also,
we
are
adding
support
for
advanced
events,
batching
based
on
file
park
or
pull
request
on
merge
required
titles
on
your
git
provider,
also,
as
required
requested
by
various
other
customers
who
are
trying
out
back
now.
Pack
comes
with
better
electroline,
both
in
CLI,
as
well
as
in
different
kind
of
git
providers
such
as
GitHub.
So
we
display
various
pipelines
on
errors
and
add
a
small
speaker
into
the
GitHub
checks
or
as
a
VCS
comment
in
1.9.
L
We
are
also
adding
support
for
CSI
and
projected
volumes
to
be
used
as
workspaces
in
openshift
pipelines.
Another
key
feature
that
we
are
releasing
as
Tech
preview
is
that
we
understand
that
there
are
customer
pain
points
with
managing
Kickin
and
Kickin
back
CLI
with
Pac
now
becoming
GA.
We
want
to
provide
a
uniform
delivery
and
usage
method
for
both
the
clis,
so
we
are
consolidating
TK
and
TK
back
and
we
are
delivering
openshift
pipeline
CLI
or
OPC.
It
will
be
again
in
tech
review
mode.
L
A
couple
of
other
key
delivery
variables
in
1.9
include
openshift
pipelines
now
becoming
available
in
depth
sandbox
and
some
minority
Works
improvements
for
pythons
inside
their
country.
With
that,
I
now
have
to
go
to
Harriet
to
talk
about
operation
terms.
N
Thanks, Kustav. OpenShift GitOps version 1.7 will be available with OpenShift 4.12, and this will include upstream Argo CD version 2.6. Similar to Tekton, the Argo project recently graduated from the CNCF; very exciting. Some highlights from our latest release: we've added support for server-side apply. This lets you update or sync a partial object that you're opinionated about, and it can be useful for patching resources and also if your resource doesn't fit in an annotation. The team has also added a tech preview to support managing applications across namespaces.
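Server-side apply is turned on through Argo CD's sync options; a sketch at the Application level (other required Application fields, like `source` and `destination`, are omitted for brevity):

```yaml
# Application excerpt: sync everything with server-side apply
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
spec:
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
```

It can also be scoped to a single resource with the `argocd.argoproj.io/sync-options: ServerSideApply=true` annotation on that resource.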
N
Argo CD can now recognize application resources that have been applied across the cluster, not just within the namespace where Argo CD has been deployed; give it a list of namespaces to manage and it will be able to keep them all in sync. There have been a bunch more improvements to the operator: you can now set custom node selectors in the Argo CD custom resource.
N
Any
additional
selectors
that
are
added
will
be
merged
with
any
existing
ones,
such
as
run
on
infra
I've,
also
added
the
ability
for
admin
to
disable
the
link
to
ago
CD
from
the
openshift
console
more
information
about
the
features
I've
mentioned,
as
well
as
a
full
list
of
updates
and
fixes
can
be
found
in
the
release.
Notes
of
openshift
get
Ops
version,
1.7
I'll.
G
Thank you, Harriet. OpenShift Serverless makes the most of your OpenShift and OpenShift Platform Plus by offering better autoscaling and networking for your stateless microservices, containers and functions.
G
It is based on the upstream project Knative, and with 4.12 we will be updating it to Knative 1.6. We are very excited to announce that Serverless Functions is now GA with the Quarkus runtime. Serverless Functions dramatically increases developer velocity by providing templates for jump-starting your app. It offers a local developer experience through the CLI and IDE, as you can see in the GIF, and it also offers in-cluster builds for your production needs.
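A sketch of the local function workflow with the `kn func` CLI; the function name is hypothetical:

```shell
# Scaffold a new function project from the Quarkus template
kn func create my-fn --language quarkus

# Build and deploy it to the current cluster and namespace
# (builds the container image and creates the Knative service)
kn func deploy --path my-fn
```

The same project can also be run locally during development before deploying to the cluster.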
G
A
Kafka
broker
and
Kafka
sync
are
also
GA
for
all
your
production
needs
around
creating
event-driven
applications,
Kafka
broker,
maximizes
Kafka
performance
and
reduces
Network
hops.
Another
GA
shout
out
is
for
init
containers
and
persistent
volume
claims
for
implementing
any
initialization
logic
and
using
any
permanent
data
storage
that
you
need
for
your
creating
applications.
G
Under
our
security
promise.
We
have
introduced
mtls
natively
in
Canada
as
a
tax
review
feature
and
the
last
we
have
upgraded
our
serverless
logic
developer
preview,
which
offers
workflow
capabilities
for
managing
failures,
retries,
parallelizations
and
service
Integrations.
Please
see
our
release
notes
for
the
full
list
of
features
and
we
would
love
to
hear
any
feedback
up
next
slide.
Please,
service
mesh
helps
you
create
secure,
reliable
microservices
with
enforced
TLS,
encryption,
zero
trust,
traffic
policies
and
instant
visibility
with
out-of-the-box
metrics
and
traces.
G
We
recently
introduced
openshift
service
openshift
service
mesh
2.3,
which
updates
is
tier,
2
1.14.
This
release
brings
GA
support
for
Gateway
injection,
which
allows
istio
gateways
to
be
deployed,
managed
and
upgraded
independently
of
the
istio
control
plane.
Some
notable
check
preview
features
in
this
release
are
service
mesh
console
plugin.
That
brings
the
kiali
graph
into
the
openshift
console
and
weave
service
mesh
data
into
the
workload
and
service
pages.
We
have
also
introduced
a
cluster-wide
installation
option
that
is
optimized
for
large
meshes
within
a
single
cluster
and
to
align
with
Upstream
steel.
G
This
will
become
the
default
installation
option
in
our
future
releases.
We
are
continuing
to
evolve.
Istio
support
for
kubernetes
Gateway
API
with
kiali
support,
as
we
have
added
in
this
release
and
finally,
this
release
brings
support
for
federating
service
meshes
across
clusters
on
Azure
Red
Hat
openshift
Auto
I
will
now
hand
off
to
Julian.
Thank
you,
foreign.
D
Next slide. Yes, so let's talk about installer flexibility. As you've noted, we currently have a bunch of different installation methods and supported providers; in 4.12 there are four. Specifically, the first one is around full-stack automation, what historically folks remember as installer-provisioned infrastructure, where the installer controls all areas of the installation, including the infrastructure provisioning, and it provides opinionated best practices on how you deploy OpenShift.
D
The second method is the pre-existing infrastructure deployment method, which we call user-provisioned infrastructure, where you're responsible for provisioning and managing your own infrastructure, allowing you greater customization, operational flexibility and control. The third one is the interactive connected experience, centered around what we call the Assisted Installer, and this provides you a web-based experience for creating your clusters. And then, lastly, in the Spotlight section we highlighted the agent-based installer, which provides a streamlined experience for deploying OpenShift in a fully disconnected or air-gapped environment.
D
We
also
have
shared
VPC
support
with
gcp,
with
the
IPI
installer
we've
added
support
for
R1
azir,
as
well
as
expanding
to
enable
Zone
Awareness
on
openshift
fees.
Here
we're
continuing
to
make
progress
on
cluster
API,
which
will
become
our
standard
way
for
provisioning,
upgrading
and
operating
multiple
kubernetes
clusters,
and
we
still
have
work
to
do
on
this
front
and
in
the
meantime,
the
machine
API
will
continue
to
be
used.
But
let's
dig
in
and
take
a
look
at
some
of
the
exciting
new
features.
Next
slide.
D
So
openshift
in
vsphere
is
Zone,
aware
so
beginning
in
openshift
412
we're
introducing
the
ability
to
install
openshift,
sonal
clusters
and
fees
here
using
the
installer
provision,
infrastructure
method-
and
this
is
a
tech
preview
feature,
and
this
leverages
a
vcentered
text
to
associate
those
tags
with
openshift
regions
and
open
YouTube
zones,
and
so
now
customers
can
associate
vcentered
data
centers
with
openshift
regions
and
then
similarly
vcentered
clusters
with
open
shift
zones,
and
you
can
actually
see
that
in
this
diagram
and
what
this
allows
you
to
do
is
to
start
to
have
better
awareness
of
having
separate
failure,
domains
and
create
a
better
higher
availability
for
your
deployments
next
slide.
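In the install-config, this zone awareness is expressed as failure domains that map vCenter objects to OpenShift regions and zones. An abbreviated sketch; the schema reflects the 4.12 tech preview and the names are purely illustrative:

```yaml
# install-config.yaml excerpt: vSphere failure domains (tech preview)
platform:
  vsphere:
    failureDomains:
      - name: us-east-1a               # illustrative names throughout
        region: us-east                # tag attached to a datacenter
        zone: us-east-1a               # tag attached to a cluster
        server: vcenter.example.com
        topology:
          datacenter: dc1
          computeCluster: /dc1/host/cluster1
          datastore: /dc1/datastore/ds1
          networks:
            - vm-network-1
```

Machines placed in different failure domains then land on different vCenter clusters, giving the separation described above.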
D
Some
of
the
notable
changes
that
we
have
in
up
in
shift
12
is
that
we're
going
to
remove
support
for
VMware
vsphere
6.7
U2,
as
well
as
7.0
U1,
is
being
deprecated
and
then,
similarly,
on
the
virtual
hardware
version
13..
This
has
been
removed.
D
Next slide. Let's talk about flexible OpenShift installation. For some time now there's been an increased desire to move away from a one-size-fits-all cluster installation and be more flexible in how a cluster gets built out of the box, partly to reduce security exposure.
D
So, in conjunction with all of these efforts to make the installation more flexible, we're continuing our efforts to make OpenShift more composable by providing a mechanism for you to exclude one or more optional components during the installation, and this in turn will determine which payload components do and do not get installed in the cluster. In the previous release we already made it possible to disable the marketplace operator, the samples operator and the bare metal operator, and now in 4.12 we've expanded that list.
D
The list now includes the console operator, the Insights operator, the storage operator and the CSI snapshot controller operator. You can disable these components by making some changes within the install-config.yaml, as noted in this particular slide, and after you've disabled them you also have the ability to re-enable them after the cluster is installed, should you choose. Next slide.
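In install-config.yaml this composability is expressed through the capabilities stanza; a sketch, with component names per the 4.12 documentation:

```yaml
# install-config.yaml excerpt: start from no optional capabilities,
# then opt back in to only the components you want in the payload
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
    - marketplace
    - Console
```

Choosing a named baseline set (such as the current version's default) and then adding capabilities individually is the typical pattern.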
D
In 4.12 we're now promoting the deployment of OpenShift on IBM Cloud to GA. What this means is you can now deploy private clusters in IBM virtual private clouds using the installer-provisioned infrastructure, or full-stack automation, method, and you can now create private or disconnected deployments as well, using OpenShift in an existing VPC. A couple of things to note: IBM Cloud still only supports IPv4, so dual-stack or IPv6 environments are not yet possible.
D
Next slide. Let's talk a little bit about what we're doing in 4.12 for GCP. Furthering our commitment towards the open hybrid cloud, you can now use your committed GCP spend towards purchasing and running Red Hat offerings directly through the GCP Marketplace. In addition to that, we're delighted to announce that in 4.12, customers such as yourselves will now be able to deploy on GCP using your existing shared VPC, or XPN, configurations using the IPI workflow. Please note that this is in tech preview.
D
We've
had
a
lot
of
interest
in
this,
and
a
couple
things
to
note
is
with
this
particular
method.
You
still
need
to
pre-create
some
of
your
resources,
such
as
networks,
stuff,
Nets,
firewall
rules
and
DNS
configurations,
and,
lastly,
you're
also
now
able
to
take
advantage
of
using
a
gcp
instance
that
has
a
surface
account
bound
to
it.
Instead
of
downloading
your
service
account
keys
for
openshift
deployments
on
gcp,
so
let
me
hand
this
off
to
Heather
to
talk
about
transparent,
Network,
proxy
installs.
B
Prior to OpenShift 4.12, customers who wanted to install clusters with transparent, network-level proxies needed to wrangle with `openshift-install create manifests`, unless their cluster could get far enough to make it to day-two changes via the cluster's Kubernetes API. Folks periodically trip over this and end up complaining to the installer team about surprising additional trust bundle handling. Now we're providing more convenient configuration with more discoverable documentation, which will reduce the customer pain of tripping over the existing UX wrinkle and will also reduce installer noise, responding to customer complaints.
B
O
Thanks Heather. I get to talk to you all about cluster infrastructure now. So we continue to do loads of work on the provider side of things, but I guess the big-ticket item for this release is what we're nicely calling managed control planes. And why do you care about that? Well, you know, if you wanted to scale your control plane, maybe you wanted a larger machine configuration, there's a process there now that lets you do it, but maybe it's a little bit manual.
O
Maybe it's not as easy as you would like it to be. So here we come with managed control planes: we've automated the task for you, and you can scale your systems up, or indeed down, much more easily. And this is useful not only for moving to bigger systems; you can also use it to replace a specific control plane machine, you know, if it's acting up or something like that. This is particularly useful in today's world of managed services.
O
So if you imagine that you're running a cloud with loads of customers on it, which could be internal or external, then you want those tasks to be really easy. So this is one thing that we see being much, much needed in the industry, as it were. Next slide, please. Next up is systems enablement, which we've renamed to multi-architecture compute for 4.12, and a few things are happening here.
O
The first one is multi-architecture compute, which you may have heard us refer to as heterogeneous compute in the past. That's going to stay in Tech Preview for now, only on Azure, but don't worry, we're aware of that and you'll see more on it in future releases. If you do want to play with it, there is a multi-arch payload there, and you can force-accept payload upgrades.
O
Excuse me; it's not something that's supported, but you can go and try it out. On the Arm side, this was already mentioned, but to repeat the message and drive it home: we now offer OCP on Arm on Azure with our automated installation, or IPI, method. Again, watch out for UPI coming soon. On the Power and Z side, just one thing I'm going to mention here: do take a look at the release notes.
O
There are some deprecations in there for the older systems; there were too many to list on the slide here, so just take a look at that. They're not going away now, but they will in the future, so this is a chance, you know, to work with your customers and give them an advance heads-up that things are happening there. And next, I would love to hear Mark Russell tell us some interesting facts about what's next for CoreOS.
P
P
Manage your operating system updates and configuration the same way you manage your applications: that's what we aim to deliver by making OS container images that are bootable. These aren't like your regular application container images that you would use with a container engine like Podman or CRI-O. These are OS containers that contain the kernel, packages, and everything needed to update on-disk content for physical and virtual machines. But because they're in OCI format, every tool and technique that you know from app containers can now be applied to these bootable host images: build, inspect, test, and mirror them the same way
P
you do with any other container. In 4.12 we're officially supporting this usage for support-delivered RHEL hotfix packages. However, it can be used as a Developer Preview for any customization: you could copy in configuration files without using MachineConfigs, install third-party agents, or add extra packages. The sky's the limit. Next slide.
P
So here's the simplified hotfix example; maybe it'll just clarify things a little bit, I'll be quick. Here we have a Dockerfile, also known as a Containerfile, that essentially says: take the latest RHEL CoreOS image, copy in a couple of hotfix RPM packages inside this container image, and install them, overriding the versions that are built into the base image. Then you build it with Podman or another tool, push it to a registry, and from there you can now apply that customized image to one or more of your pools in the cluster. Thanks.
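A minimal version of the Containerfile just described might look like the following sketch; the image reference and RPM names are placeholders, so substitute the RHCOS base image and hotfix packages for your release:

```dockerfile
# Start from the RHEL CoreOS base image (substitute your release's image).
FROM quay.io/openshift-release-dev/rhel-coreos:4.12

# Copy the support-delivered hotfix packages into the image.
COPY hotfix-kernel-*.rpm /tmp/

# Override the versions baked into the base image, then commit the result
# so the container image is usable as a bootable host image.
RUN rpm-ostree override replace /tmp/hotfix-kernel-*.rpm && \
    rpm-ostree cleanup -m && \
    ostree container commit
```

After `podman build` and a push to your registry, the customized image is rolled out to a pool by pointing a MachineConfig's `osImageURL` at it.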
R
Hey, thank you Mark. So, something we actually started in 4.10 is graduating to GA in 4.12: the deployment of OpenShift on top of OpenStack with remote workers, which basically means that we can leverage the remote workers of OpenStack. This is mostly targeting the near-edge aggregation sites, from the centralized site up to the big aggregation layers, which is most of the tier-one aggregation levels.
R
We can actually deploy OpenShift clusters as a whole, masters and workers, and we can also spin workers across different OpenStack AZs, given the limitation that we have on the underlying networking: basically, we need to keep it under a 100-millisecond round trip. This also involves OpenStack 16.2, so we will need to adhere to that as well. Next slide, please.
R
Building on top of that, in 4.12 we're introducing, in Dev Preview, the ability to actually fully stretch the OpenShift cluster. So, based on the same topology as the slide before, we can have fully stretched control planes on different AZs, meaning we can have the same clusters as we had before, deployed as a monolithic cluster per AZ, or we can even span them across multiple AZs.
R
We are targeting to have it in Tech Preview in 4.13 and fully GA in 4.14. Also, we need OpenStack 16.2 as the baseline to build on top of. Next slide, please.
R
A bit of a detour, but we are introducing a new deployment method for OpenStack. You can think of it as a reimagining of how we deploy OpenStack: we're actually using OpenShift as the infrastructure to drive the OpenStack deployment and host the control plane. We are leveraging CNV, so OpenShift Virtualization, to host the control plane VMs for OpenStack, and we are using Metal3 as the bare metal inventory. So, for those of you who have some OpenStack knowledge, think of it as a pre-provisioned deployment of OpenStack.
R
But now we are using OpenShift as the go-to infrastructure, which actually gives our customers the ability to grab the rope at both ends. They can have, on the same common infrastructure, the next-gen fully containerized workloads running on bare metal, and they can also have the traditional VNFs, or traditional virtualization workloads, running basically on dedicated nodes, which are the OpenStack nodes.
R
By the way, those nodes are transient, so I can basically scale the bare metal inventory up or down and shift those nodes around if I need to run, for example, more OpenShift workload or more OpenStack workload. And yes, we are supporting sandwiches: if anyone wants to run OpenShift on top of OpenStack on top of OpenShift, we can accommodate that. And with that I will hand over for the control plane updates. Thank
A
you, and hi everybody. I'm going to cover a couple of updates on the control plane; the usual presenter is not here, so I'm covering on his behalf. The first one is really the Tech Preview for a new CLI plugin manager, Krew, which is the upstream name for it, and we're continuing with that name. With Krew being added to oc, you can discover oc plugins, you can install them, and then you can also keep them updated.
A
So this makes the CLI with Krew much more pluggable, and therefore you can install your favorite supported plugins and use it that way.
S
A
It is in Tech Preview. The other thing, obviously, is really two exciting new things at the core of the platform. One is crun: this is an OCI runtime which is written in C, and therefore it offers a faster and lower-memory footprint than runc.
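If you want to try the new runtime, switching a pool's default runtime is done with a ContainerRuntimeConfig; this is a sketch based on the Tech Preview docs, so verify the field names against your release:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-worker
spec:
  # Apply only to the worker MachineConfigPool.
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    # Use crun instead of the default runc.
    defaultRuntime: crun
```

The Machine Config Operator then rolls the change across the selected pool node by node.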
P
A
And with cgroups v2, which is the next-generation control groups implementation in the Linux kernel, we are introducing that also as Tech Preview. With cgroups v2 you get better observability for out-of-memory pressure scenarios, you get better page-cache management and memory accounting, and the current implementation is a one-to-one match with v1.
A
J
We already mentioned the Advanced Cluster Security cloud service, but there are also a number of other updates. We understand that one of the key areas of the platform is to make it simpler for users to prioritize issues, and for this we have included a new top-level dashboard that is specifically designed for that. We have also made some adjustments in the network graph.
J
There are two ready-to-use policies that are very useful for admins, for instance checking privilege escalations and also checking whether we have externally exposed services. ACS will use PostgreSQL as its back-end database in the future; this will replace the current RocksDB that we use today, and we do this because the change will bring benefits such as improved performance, easier backup and restore, and also disaster recovery.
J
Now, we also provide a way to shift left your network policy creation. This is now in Tech Preview, and it is based on application YAML manifests, so you can use it to develop network policies as part of your CI/CD pipeline before deploying applications on your cluster. And the last part is vulnerability management: we have included support for RHEL 9, and now we are also able to alert if you are using any component in your Dockerfile that contains CVEs.
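The output of that shift-left workflow is ordinary Kubernetes NetworkPolicy YAML that you can commit next to your application manifests. As a hypothetical example of what a generated policy for a frontend/backend pair could look like (names and port are illustrative, not taken from the talk):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # hypothetical generated policy name
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  # Only the frontend pods may reach the backend, and only on its API port.
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Because it is plain NetworkPolicy, the same file can be validated in CI and applied unchanged to the cluster.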
J
If we go to the next slide, we'll talk about the Compliance Operator. The Compliance Operator is an operator that helps you stay compliant against security standards, identify gaps, and provides remediations. We have now also provided better control of the resources allocated for this: we do this by letting you customize CPU and memory resources for scans, and also by watching the resources in the given namespace.
J
We are also now able to prioritize which parts of the workloads we want to scan first, and we get more accurate results by evaluating the default configuration values against the compliance rules. We have also expanded support for PCI DSS profiles, and those will now also be available on the IBM Power architecture. If we go to the next slide, I will be happy to present the Security Profiles Operator, which is going GA soon.
J
The Security Profiles Operator helps admins use SELinux and seccomp effectively. We know one of the major issues that exists with both of these is how complex it is to actually create your profiles. This is the solution for that: it helps you create your profile by recording what your application needs and creating the profile from that. It also helps you manage the profiles across nodes and namespaces, and it's also able to validate the target: for instance, if a node doesn't support seccomp, in that case it doesn't apply the profile.
T
Hey, thank you Maria, fantastic. It's an awesome experience to be here in 2023, launching with some great releases in the management space. We continue to push further into Advanced Cluster Management, and as you'll see on this slide, governance kicks it off as one of our top features, with a framework and policy engines that our customers continue to come back to, asking for more. That's awesome. One of the key features they want is to order the execution of their policies.
T
T
Next, on automatic reconciliation: we're now syncing the secrets to the managed hub via templating, and the managed environments can now take an automatic approach to reconciling resources from policy templating, whereas before that was a manual sync. And next, our policy generator can now reference remote HTTPS-hosted Kustomize configs. This gives you super flexibility, pushing policy directly from the source.
T
Moving on to the next slide, we'll talk a little bit about our Better Together strategy and how we're ensuring that anything you create across the OpenShift Platform Plus portfolio works better together with ACM. In particular, you'll see enhancements across Ansible right here, making sure you have the best experience with policy violations by providing additional context into the Ansible post-hook flows. We're also including Ansible workflows for both our cluster and application lifecycle
T
events, giving you a broader range of flexibility in what you choose to automate on the Ansible side, even including labels and tags, which is a very highly requested customer feature, to give you more information as you pump out those Ansible automations. We are a few days away from the ACM and MCE community operators; I'm told it should be the end of this week. That's been a long-awaited feature set that gives our customers easier traction when adopting new features in the new release streams.
T
T
Lastly, again in this Better Together theme, we're talking about multi-cluster networking with Submariner. Enhancements there include the automated configuration on ARO and ROSA, and highly requested features around disconnected and air-gapped environments, ready to go with both OVN and SDN. So, as Mark Curry said, we're all set for you on 4.12.
T
Let's kick it over to slide 63; we'll talk about management at the edge for just a minute. This slide probably looks familiar, and it should, because we continue to hammer away on this highly sought-after capability for full-scope management from the center out to the edge. ACM can now manage 3,500 single-node OpenShift DUs; we appreciate the ACM performance team for what they're doing with the testing cycles out there. A DU is a distributed unit,
T
so if you're familiar with the telco RAN space you'll understand. We really think that a single-node OpenShift deploying edge capabilities out there, where customers need it, lets them use it to improve their experience. Of course, that's in IPv6, connected, and disconnected scenarios; we understand that this continues to grow as a very highly sought-after area of OpenShift capability.
T
I also want to highlight our search squad. Version 2 of search, called Odyssey, is now GA and ready for high-scale environments. I love the fact that we're bringing in the search resource details, bringing that more up to speed with the way it looks in the OpenShift console, and expect further enhancements to those search results in the coming releases.
T
You'll also remember that we have user-configurable dynamic metrics collection; that's a great feature for management out at the edge. We'll slide forward to the next one here. Man, this is getting great: the Topology Aware Lifecycle Manager, TALM. You may remember that it used to be called TALO. This operator works alongside OCP 4.12 and ACM to ensure that you have the ability to group your assets, to group your clusters, and roll out the policies in a phased approach.
T
You want to ensure that the success rate of that rollout is moving at the pace you expect it to. So specifically, the features here include, for single-node OpenShift, the creation of backups and restore scripts, so that on a failure you can actually restore that SNO to the pre-upgrade image from before the upgrade kicked off.
T
As you know, a single node is a single point of failure, so having a feature like that is awesome as you're rolling out across your provider network. Next, we'll slide into the last one here, which is around the backup solutions for OpenShift. This falls under our business continuity theme, and as we can see here, the OADP 1.1 version now has a native backup utility with 4.12. What's cool about this? You can continue to use the existing third-party backup applications from our ISVs, an awesome list that you can see there on the screen.
T
But now we have a native utility for an OCP cluster backup that can be sent directly to S3. These native capabilities should make it easier for you to use any native snapshot or Data Mover capability, using S3-based object storage outside of the cluster. This will work for any CSI-snapshot-supported storage, and you can use the CLI to schedule backups and restores. A UI is being worked on for later releases, so we'd love your feedback.
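Since OADP builds on Velero, scheduling a recurring backup from the CLI comes down to creating a Velero Schedule resource. A sketch, where the application namespace and storage location wiring are placeholders:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-app-backup
  namespace: openshift-adp        # OADP operator namespace
spec:
  schedule: "0 2 * * *"           # cron: every night at 02:00
  template:
    includedNamespaces:
    - my-app                      # placeholder application namespace
    storageLocation: default      # S3-backed BackupStorageLocation
    ttl: 168h0m0s                 # keep backups for 7 days
```

Restores are driven the same way, by creating a Restore resource that references a completed backup.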
T
U
So let's take a look now at what's new in observability; Jaime and I are going to take it from here. So, next slide, please. Within the monitoring field, we are now allowing users to specify topology spread constraints for Prometheus, Alertmanager, and Thanos Ruler, and we've also improved the consistency of the Prometheus adapter.
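Topology spread constraints for the platform monitoring components are set through the cluster-monitoring-config ConfigMap; a sketch for spreading the Prometheus replicas across zones might be:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone   # spread replicas across zones
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: prometheus
```

The same pattern applies to the other components (for example Alertmanager and Thanos Ruler) under their own keys in this config.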
V
So, under the data collection pillar for observability, with the release of Logging 5.6, this will be our GA of Vector as an alternate collector to Fluentd. Some of the advantages of this: Vector is very scalable, and it's vendor neutral.
V
U
And continuing to the storage part: in OpenShift 4.12 we have focused a lot on the feature and version updates across the whole monitoring stack. Keeping these components and dependencies up to date ensures that the monitoring stack is running optimally, and also reduces the risk of encountering errors and such.
V
And for data storage, in Logging 5.6 we are offering stream-based retention for Loki. So rather than having Loki apply one global retention, you can now divide that retention and enable the configuration per tenant and per stream. Next slide.
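In the LokiStack custom resource, per-stream retention sits alongside the global policy; a sketch (the selector and durations are examples) could look like:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      retention:
        days: 7                     # default retention for everything
        streams:
        - days: 1                   # shorter retention for noisy test namespaces
          selector: '{kubernetes_namespace_name=~"test-.*"}'
```

Per-tenant overrides follow the same shape under a tenant-scoped limits section.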
U
And here we have one of our Tech Previews, and that is to allow admins to create new alerting rules based on the platform metrics. By providing this capability, you can now set up rules for certain conditions, thresholds, and performance-related issues, and this will allow the admins to proactively address any issues that may arise within the platform and, of course, ensure its performance.
V
U
And here we come to the visualization part. We now have an improved UX in the web console, and we're also going to show a demo of that on the next slide. We also have support for Alertmanager negative matchers, and what that means is that you can now silence alerts directly in the OpenShift web console.
V
And as Roger was saying, we'll have a visualization in the next slide, but with this Logging 5.6 release you can now explore the logs under the Developer console. So we'll have an aggregated logs tab where you'll be able to search, filter, and visualize those logs by severity, and identify issues in your cluster a lot more quickly. We'll have some predefined filters that will help a lot, for pods and containers for example, so that will make searching the logs a lot quicker. Next slide.
V
U
So the last one, the fifth pillar, where we hope going forward to give you a more consistent experience, is the data analytics part. And here, from a monitoring perspective, web console users can now use runbook URLs via the alerting UI, and that means that if an alert includes this URL, you will be able to access the runbook information by clicking on the alert, and then you can address the issue.
V
And
also
to
enhance
data
analytics
and
logging,
we're,
including
another
customer
requested
feature
which
is
adding
the
openshift
cluster
ID
to
log
records
that
really
helps
you
analyze
problems
better,
because
you
can
uniquely
identify
each
of
the
Clusters
in
your
aggregated
logs.
So
you
can
see
at
a
glance
a
more
clear
picture
of
what's
Happening
and
thank
you.
This
is
for
all
for
observability
and
I'll
hand
off
to
you.
Tomas.
W
Hi everybody. Insights continues providing actionable recommendations based on best practices and Red Hat's own experience of managing OpenShift clusters. With the release of OpenShift 4.12,
W
we are expanding this into workload management, with a new capability that will provide you recommendations on things like unset CPU limits or memory configuration for your workloads. The next feature that we're releasing with this OpenShift release is the ability to display the most critical recommendations as in-cluster alerts, so that you have them handy in the OpenShift web console whenever you use the admin console. And with that, why don't you tell us a little bit more about Insights cost management?
X
You can add all your cloud accounts to cost management, and then we will create reports for you. We'll gather all the information and tell you: this application is costing you this much, for this project, or this cluster, or this cloud account. You can see consolidated views for all of that, and the use case we're targeting is to give you the fully loaded cost for your application, project, whatever, including not just the cost of the workload itself, but also the cost of third-party services, or the cost of ROSA, the cost of ARO,
X
because of networking and storage, all of that. We are even including now the cost of the unallocated capacity, because someone has to pay for that, or the cost of the control plane; that's a real cost for you. And we are also improving the AWS case, with more sensible defaults in case you have Savings Plans, which is a reality for many of our customers. I think what we have done in the past months is very exciting. And Deepti, it's your turn. Thank you.
Q
Thank you, and hi everyone. So we're constantly trying to evolve to make OpenShift networking more performant and secure. Now, we already heard Mark talk about some of the flagship features for this release in the networking area; along with that, we're very happy to introduce a Technology Preview of the Ingress Node Firewall Operator. This is basically designed to keep your cluster away from external threats by monitoring and controlling your incoming traffic.
Q
Given that deployments vary in their security requirements, we have looked to provide a supported way to deploy firewall rules on OpenShift nodes, so that the customer can write the firewall rule set that fits their needs and update it as the network configuration evolves, through this operator. And this is implemented with XDP and eBPF for high performance. So basically, you use a custom resource to configure and deploy your rules; we have a webhook that validates your configuration; and we have the XDP/eBPF data path,
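The custom resource mentioned here is an IngressNodeFirewall object; as a sketch (node labels, interface, CIDR, and port are all placeholder values):

```yaml
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: block-external-ssh          # hypothetical rule set
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  interfaces:
  - eth0                            # interface(s) the rules attach to
  ingress:
  - sourceCIDRs:
    - 203.0.113.0/24                # example external range
    rules:
    - order: 10
      protocolConfig:
        protocol: TCP
        tcp:
          ports: "22"
      action: Deny                  # drop SSH from that range at the XDP layer
```

Lower `order` values are evaluated first, so you can layer narrow allows ahead of broad denies.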
Q
which looks into your rules, parses the packets, and takes further actions. In this release we support configuring stateless policies, and we're looking to evolve this to support stateful policies in an upcoming release. Next slide, please. So, we have undertaken a lot of Ingress enhancements in this release based on customer needs and asks. Now you have the ability to tune the TTL, the time-to-live duration, for both successful and unsuccessful DNS queries, so you can reduce load on your DNS infrastructure.
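The TTL tuning lives on the cluster DNS operator object; a sketch with example durations:

```yaml
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  cache:
    positiveTTL: 1h      # how long successful lookups are cached
    negativeTTL: 30m     # how long NXDOMAIN/failure responses are cached
```

Longer positive TTLs cut upstream query volume; keeping the negative TTL short avoids caching transient failures for too long.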
Q
We've also had multiple requests from customers who want to deploy in a DNS zone very different from the cluster DNS zone, and to address this we've provided the ability to completely disable DNS management on your Ingress Controller. So now we have two states, Managed and Unmanaged; when it is Unmanaged, the responsibility falls to the cluster admin, and we also allow seamless transitions between these two management policies.
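Disabling DNS management is a field on the IngressController resource; a sketch for a load-balancer-published controller serving a zone managed outside the cluster (domain is a placeholder):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sample-ingress
  namespace: openshift-ingress-operator
spec:
  domain: apps.other-zone.example.com   # zone managed outside the cluster
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      dnsManagementPolicy: Unmanaged    # cluster admin owns the DNS records
```

Flipping `dnsManagementPolicy` back to `Managed` is the seamless transition mentioned above.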
Q
Along with it, we have Ingress Controller autoscaling, which is Tech Preview for 4.12. Now you can use the Custom Metrics Autoscaler operator to dynamically scale the default Ingress Controller based on metrics from your cluster, such as the number of workloads, available resources, etc. Both the Custom Metrics Autoscaler and the Ingress Controller autoscaling are Technology Preview. In the interest of time, we don't have room to talk about all of the Ingress enhancements that we have undertaken;
Q
I would kindly request you take a look at the release notes for more details. Next slide, please, and over to Peter for virtualization updates. Thank you.
S
Thank you. We've got a lot going on when you want to run virtual machines on the OpenShift platform. A lot of them are advanced features that you may be used to on existing platforms such as VMware: VM export, for example, being able to move an actual, fully formed VM between different clusters. But we've also paid very close attention to improving the administration experience of virtual machines on a Kubernetes cluster.
S
So we've made great improvements in dashboards, with a lot of UX research from our team and feedback from customers. We now have more detailed statistics about what's going on with live migrations, what the histories look like, and the ability to connect to the cluster via SSH completely through the API.
S
As far as observability goes, again, we want to focus on not just normal workflow operations, but to give you enough detail to actually solve problems on your own without having to be a Kubernetes expert. So: how many migrations are actually in progress right now? How many are scheduled? Are VMs migrating too frequently? A lot of that information should be exposed to the administrator in a very easy way. Also, we wanted to make the experience of just a normal OpenShift upgrade much less noisy.
S
So there were a couple of false alerts; we've toned those down, and the issues that actually do come up will be things that you probably do need to pay attention to.
S
Since we're running KVM VMs, we want to make sure that both RHEL, where I think 9.1 is the fully supported guest, and Windows are supported for the things we can do today: UEFI boot, Secure Boot, and the virtual TPM, which is Tech Preview at the moment. We're working to get that completed, and to make sure that you can actually upgrade your guests in a very seamless way.
S
S
One last thing I want to talk about is sandboxed containers. Right now they run on bare metal, so that level of isolation is available on-premises; we want to make sure it's available on any footprint, so we're going to start with a Developer Preview on AWS. We'd be very interested if you want to try that out and give us feedback; that will actually help drive our future product direction.
O
O
So, as always, we try to align Windows containers with the OpenShift releases, and what we're doing here is adding in support for Google GCP. All that wonderful goodness that you've experienced on the other platforms, you can now have on Google as well. Just a note on this: we're only supporting Windows Server 2022, with the appropriate patch that I'm not going to bore you by reading out.
Y
Hello, yeah. We are announcing the general availability of the Kernel Module Management operator, also called KMM. So KMM is a day-two operator helping partners enable hardware; it could be AI accelerators for training, or telco layer-1 accelerators. KMM is upstream in a Kubernetes SIG, so it's an enabler for a new stack, new hardware, that a partner wants to enable quickly. It can be used permanently, or for a transition period while waiting for the driver to get upstream in-tree or downstream inbox. So KMM can build, sign, and load kernel drivers.
Y
It can be used, for example, for UEFI Secure Boot, with signatures on the kernel modules, and it can enable device plugins. KMM supports loading device firmware, and it can target specific kernel modules based on regular expressions. So KMM can manage the whole lifecycle of these kernel modules, and it's replacing the Special Resource Operator (SRO).
Y
KMM itself is going GA with this feature; a sub-feature of KMM for hub-and-spoke deployments is Tech Preview and will be GA soon. So you have here an example of using the KMM operator: this example is building and loading some drivers, and then loading additional drivers.
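The example on the slide boils down to a Module custom resource; as a sketch (module name, image, and selector are hypothetical):

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod                      # hypothetical out-of-tree module
  namespace: default
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my_kmod          # module to load on matching nodes
      kernelMappings:
      # Map kernel versions (by regexp) to prebuilt driver images.
      - regexp: '^.*\.x86_64$'
        containerImage: quay.io/example/my-kmod:${KERNEL_FULL_VERSION}
  selector:
    node-role.kubernetes.io/worker: ""
```

The kernel-mapping regexps are what let one Module serve several kernel versions, each with its own driver image.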
Y
So it's really a simple enablement for partners to enable their accelerator stack. KMM falls under the third-party support policy, which means that these kernel modules have to be certified, but the third-party kernel modules themselves are not supported by Red Hat; they would have to be supported by the partner who is building them. And with that, I hand over to Tony. Thank you.
Z
Thank you. I'm going to introduce a simplified process to generate operator catalogs. Whether you are an OpenShift user or a certified partner, releasing your operator to OpenShift with semantic versioning becomes a lot easier. As you can see in this example, a new catalog template is introduced where you can easily see and add your release versions. opm will process this template and generate the lower-level `replaces` and `skips` attributes behind the scenes for you. Channel names are auto-generated, so you can easily deliver releases into stable, fast, or candidate channels, just like OpenShift.
Z
You are not required to maintain the update graph manually: releases with higher version numbers will replace the older ones, and cross-channel update edges are also auto-generated according to best practices. You can store this template in a Git repository, which means publishing a net-new release, or releasing a patch for an older version, becomes an easy one-line change in this YAML file. This is Developer Preview in 4.12, so we would love your feedback. Next, I'll hand it over to Gregory to talk about storage.
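As a Dev Preview sketch, the semver catalog template is a small YAML file listing bundle images per channel tier; opm expands it into the full file-based catalog. The keys below follow the dev-preview shape and may change, and the bundle images are placeholders:

```yaml
Schema: olm.semver
GenerateMajorChannels: true
GenerateMinorChannels: false
Candidate:
  Bundles:
  - Image: quay.io/example/my-operator-bundle:v1.1.0
Fast:
  Bundles:
  - Image: quay.io/example/my-operator-bundle:v1.0.1
Stable:
  Bundles:
  - Image: quay.io/example/my-operator-bundle:v1.0.0
```

Rendering is done with an opm alpha subcommand (e.g. `opm alpha render-template semver template.yaml` in the dev preview); the `replaces`/`skips` edges and channel heads are computed from the semantic versions of the listed bundles.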
M
Thanks Tony, and hello everyone. 4.12 brings a couple of interesting updates on the storage side; let's have a look at them. Starting with cloud provider CSI drivers, we are adding Tech Preview support for Google Cloud Filestore, allowing OpenShift clusters running on top of GCP to consume file-backed storage with RWX access mode; Filestore uses the NFS protocol underneath, for your information. And the next update is around CSI migration. In 4.11 we announced GA for Azure Disk and OpenStack Cinder; in 4.12
M
we are adding AWS EBS and GCP PD disk to the list of fully supported CSI migrations. As a reminder, CSI migration works as an on-the-fly, in-memory translation layer; it doesn't involve any data migration or any manual intervention from admins or users. It is transparent and enabled by default. Next slide, please.
M
So we would like to take the opportunity to give an important heads-up for clusters that are running on top of vSphere, as CSI migration for vSphere will be supported in OCP 4.13. VMware recommends running vSphere 7.0.2 before enabling migration; for this reason, upgrades to 4.13 will be blocked until the vSphere environment is running version 7.0.2. In the same vein, for clusters that are currently using a third-party vSphere CSI driver: because two drivers cannot run at the same time, and for Red Hat to properly support CSI migration,
M
we are asking customers to replace the third-party CSI driver with the one shipped with OpenShift. Switching drivers does not involve any downtime, data loss, or performance issues. Next slide, please.
M
Another update on the VMware side: we are actually adding support for vSphere CSI topology awareness. This allows operators to create zones across multiple vSphere clusters and ensure that the PVs are stored in the same datastore zone as the worker running the pod, and it's quite useful for defining failure domains and making sure that storage remains local to that zone. This is currently implemented as a day-two manual operation; in 4.13 we are planning to improve the operator experience with IPI integration, which automates the configuration through the installer. Next slide, please.
M
Now, on to what's new in ODF.
M
All
right
anyway,
so
ldm
we're
happy
to
announce
the
general
ability
of
lvm
storage,
previously
known
as
ogf
ldm
operator.
This
solution
available
for
single
node
openshift
is
based
on
the
triple
lvm
CSI
Upstream
project.
It
enables
a
feature-rich
block
and
find
local
storage
management
with
features
such
as
think
body,
learning,
snapshots
and
clones
all
backed
with
by
the
well-known
logical
volume
managership
with
Rel.
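As a hedged sketch (the field names follow the upstream TopoLVM operator API as commonly documented; verify against the LVM Storage docs for your release), a minimal LVMCluster resource that carves a thin-pool-backed device class out of local disks might look like:

```yaml
# Illustrative LVMCluster for LVM Storage on single node OpenShift.
# Names and ratios are placeholders, not recommendations.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```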
M
It's worth noting that this technology was previously included in OpenShift Data Foundation as a technology preview. Starting with 4.12, it's now generally available to every SNO OpenShift customer, ODF or not. However, a reinstall from a previous tech preview version is necessary; there is no upgrade path from the pre-GA versions.
Now we have ODF. All right, so what's new in ODF: in this release we are promoting Metro DR to GA.
M
This solution uses synchronous data replication. The Regional DR solution, which uses asynchronous data replication, has been improved with support for file volumes in addition to block, both as tech preview. We expanded our KMS support to additional vendors that use KMIP, like Thales and others. We also have IPv6 single stack support.
M
IPv6 dual stack is still in dev preview and planned for GA in ODF 4.13. Finally, we are adding dev preview support for ephemeral inline volumes, as well as a non-resilient storage class that relies on a single replica storage pool. And now, that is it for storage; handing over to Frank for Telco and 5G.
This new feature drastically reduces the installation time by pre-downloading the installation artifacts at the factory, so the technician at the far edge site can rack, cable, and power the server. Then the ACM hub cluster, running in a central data center, connects to the single node OpenShift and triggers the installation, which uses the pre-staged artifacts instead of downloading them, making the whole procedure very fast. Thanks, and back to you.
A
Thank you, Frank. Thank you to my OpenShift PM team, and thank you all for listening. Thanks for all your questions; any unanswered questions, we'll answer offline. Have a great rest of the year and rest of the quarter, and we'll see you again soon.