From YouTube: AMA with Red Hat Engineers, Project Leads & Guest Speakers - OpenShift Commons Gathering 2022 Spain
Description
OpenShift Commons Gathering at KubeCon EU
May 17, 2022, live from KubeCon EU in Valencia, Spain
Full Agenda here: https://commons.openshift.org/gatherings/OpenShift_Commons_Gathering_at_Kubecon_Europe_2022.html
Learn more at: https://commons.openshift.org
A: All right, it's pretty amazing that we have so many Red Hatters here, so take advantage, right? This is my favorite part of OpenShift Commons: the AMA. So we're going to do a quick round of intros and then we'll get your questions answered. All right, so I'm Karena Angell. I am one of the OpenShift product managers, and my focus area is Cloud Paks. If you've heard of our partner IBM, right, we have a collection of software called Cloud Paks, and I focus on that as well as upstream projects.
C: Christian and Vadim, we sent them off to go talk to one of the other upstream projects. However, I have been working in the OKD world for... oh, I think I have a microphone; I'm the one person who's still mic'd up. I like to call OKD a sibling stream.
C: Okay, so it is built with the same release process as the OpenShift CI/CD build process. Kubernetes is really the upstream for OpenShift and OKD and all of the other things that we build with OpenShift, so I would call it a sibling stream. We have a really great relationship with the Fedora CoreOS team, and we do most of the testing and deployment within the OKD working group, across the different variations of where people like to deploy it.
K: Hello, thanks for all the presentations, very interesting. Today I have some questions about the Migration Toolkit, Konveyor, and also security with Sigstore. Are all these products supported in disconnected mode, and are there any requirements for that?
K: Yeah, but even applications and containers.
H: Yeah, I can go first for the migration part. In the Konveyor project, I mean, it depends where your origin is, what your source is for migrating applications into Kubernetes. I don't know if you would be interested in applications coming from, let's say, legacy classic runtime platforms, or what kind?
K: [inaudible]

H: Okay, so for applications, it doesn't matter that much which deployment model you might have for Kubernetes. We focus on assessing the application portfolio, which would be a questionnaire-driven assessment, and then, for the analysis, we will be working mostly with source code and binaries. So we wouldn't be that interested in the platform itself. Then, for example, with Move2Kube, you would be able to work based on the application type that you are migrating.
H: Let's say you have a Java application running on Tomcat. Move2Kube is able to understand what kind of application you have based on the dependencies, and generate the deployment manifests. So it's not so much about the kind of deployment model you might have. I don't know if that answers your question.

K: Perfect, yes.
K: [inaudible]

H: Sorry, maybe... yeah, that shouldn't be a problem, because the knowledge base is built into the project itself, so we wouldn't need a direct connection to the internet. Everything is pretty self-contained, or can work in self-contained mode, right now. I'm not sure about the design; I know the current design for the AI component powering the IBM Research tooling is self-contained, but I do know that they were thinking about having some central knowledge base or something like that.
G: Yeah, so you're asking specifically what you can use with Sigstore in a disconnected mode, right? So, as Luke said earlier today, Sigstore has multiple components. Using an OpenShift Pipeline and connecting the ACS scanner, connecting with Quay or whatever registry you have...
G: All of that can be done in a disconnected environment. Tekton Chains can run in a disconnected environment; the ACS image admission controller to validate the signature, all of that can be disconnected. The piece of the puzzle is that right now we have not yet shipped an instance of Rekor, which is the signature transparency log.
G: So if you need to validate that signature, if you need to, you know, audit the transparency log, right now that's not something that would be available on premises. It is in our plans to ship something that you could then deploy yourself, to ship a Rekor operator, so you could deploy an instance of Rekor yourself. But right now, I think there's a Rekor instance available from the Linux Foundation, though that requires online access.
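As a rough sketch of the flow G describes, using the `cosign` CLI against the public Sigstore infrastructure (the image name and key file paths below are placeholders):

```shell
# Sign an image with a local key pair; by default cosign also records
# an entry in the public Rekor transparency log at rekor.sigstore.dev,
# which requires online access.
cosign sign --key cosign.key registry.example.com/myapp:1.0

# Verify the signature against the public key. In a disconnected
# environment the public Rekor log is unreachable, so you would need
# your own Rekor instance for the transparency-log part of the check.
cosign verify --key cosign.pub registry.example.com/myapp:1.0
```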
L: Hi, sorry. We've begun our cloud journey and we've built some ARO clusters, and we've got our dev and our prod, and we want to start creating our DR instance, which we've already deployed. But how do we begin that journey of actually creating that replication between, let's say, our prod cluster and what we want to set up as our DR cluster, in case, you know, a disaster actually hits and we need to fail over? I'm pretty sure there's some documentation on it, but if you were to just give some general advice on where to begin that journey, that would be helpful.
E: The question is about disaster recovery, about strategies for disaster recovery with OpenShift. Okay, so this is a difficult question, because OpenShift is designed for applications that you can distribute, right? So most of the time, what you do is put the disaster recovery layer above OpenShift, at the application layer, and then you take care of your application being properly distributed. Having said that, obviously OpenShift has mechanisms in every cluster: if one node goes down, there's an evacuation, and another node can take over the workloads. That happens by default. I'm not sure if your question is more about... I think...
I: That's cloud-exclusive, but it's not too exclusive. Now, it depends: hot-hot or hot-cold, and that's where you start from. Now, for the cluster itself, let's start with the simplest one, hot-cold, right? You will have a definition for your cluster. For instance, in the cloud you can have an ARM template to deploy your Azure cluster, or Terraform, or an Ansible playbook, whatever.
I: So that's where you start: you're going to have that Ansible playbook stored in a different repository, in a different region, right, that you can access in case of disaster. That's the coldest possible thing you can do. You can also have the cluster already deployed in the different region; that's a bit hotter, right? Then we're going to move to the applications. Now, on the application end...
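The "coldest" option described above can be sketched as keeping the cluster's infrastructure-as-code definition in a repository mirrored to another region; the paths, remote URL, and use of `openshift-install` here are illustrative assumptions:

```shell
# Keep the cluster definition (an install-config for openshift-install,
# or your Terraform/Ansible sources) under version control.
git init cluster-definition && cd cluster-definition
cp ~/install-config.yaml .
git add install-config.yaml
git commit -m "Cluster definition for DR rebuild"

# Mirror it to a repo hosted in a different region, so the definition
# survives a regional outage (remote URL is a placeholder).
git remote add dr-mirror https://git.eu-west.example.com/ops/cluster-definition.git
git push dr-mirror main

# In a disaster, recreate the cluster from the stored definition:
# openshift-install create cluster --dir=.
```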
I: The question is: do you have a stateful or a stateless cluster? A stateless cluster is the easiest possible thing, right? So I have my pipeline; it can split the applications across two clusters, right? So my front-end application will be deployed in region one, and at the same time we can deploy it in region two. Then I need to have a global load balancer that can point to the two clusters, assuming I need, you know, an active-active setup, for instance. Now, when it comes to stateful workloads...
I: This is where things get interesting. Don't run stateful workloads inside the cluster unless you need them, right? In the cloud, what we advocate customers to do is: hey, go and use a managed service, because that managed service has backup, disaster recovery, and replication built in, right? So you just point to SQL Server, or MySQL on Azure, a managed SQL service that does logical replication to a different region, and then your data will be there, and then your front-end applications that live in an OpenShift cluster...
I: You just need to deploy them on the other end. Now, if you have a stateful cluster, for instance with MySQL running in the cluster: folks like CockroachDB figured it out, they have logical replication at the application level, but that's not the case for the rest of the things, like MySQL or Postgres or MongoDB and so on. There you're going to have to have a backup and restore solution.
I: For instance, one of the most popular ones in the community is Velero, right? So you can snapshot that underlying data and then restore it in some other region, and, you know, keep restoring it in that other region and point the other cluster to that database, right? So that's how you think of it, and it's a matter of cost and complexity. You can go for active-active, and that's going to be the most expensive one.
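The Velero approach mentioned above can be sketched like this, assuming Velero is installed on both clusters and pointed at the same backup storage location (the namespace and backup names are placeholders):

```shell
# On the primary cluster: back up the namespace holding the database,
# including volume snapshots of its persistent volume claims.
velero backup create mysql-backup \
  --include-namespaces mysql \
  --snapshot-volumes

# Optionally run it on a schedule so the DR copy stays fresh.
velero schedule create mysql-hourly \
  --schedule="0 * * * *" \
  --include-namespaces mysql

# On the DR cluster, which shares the backup storage location:
# restore the backup, then repoint applications at the restored data.
velero restore create --from-backup mysql-backup
```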
D: There is this question about data restoration, data movement, right? So, on the roadmap: this morning you may have already heard this, but Red Hat ACM will have an orchestrated failover process this year that relies on background asynchronous data replication and movement of the application definitions themselves to a DR cluster. So this is going to be a first-class concept in ACM, which will rely on ACM's native capability to apply and manage application manifests across clusters.
D: It's across the management stack, after all, and it also integrates with the VolSync project, which is how we asynchronously, in the background, replicate data between persistent volumes of different clusters, and this is what you would set up. It will continuously run in the background. There is obviously a recovery point objective and a recovery time objective, because it's asynchronous, right? You know, there are costs to these things.
D: So, a DR recovery case being so rare, it's probably acceptable that there is some loss of data. But eventually it will be a feature in ACM to be able to carry over the workload, if the cluster has failed, and restart it in the DR cluster in a very orchestrated fashion, as a first-class ACM concept.
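The VolSync replication D describes is driven by custom resources; a minimal sketch of an rsync-based ReplicationSource on the primary cluster might look like the following (the namespace, PVC name, and schedule are assumptions, and a matching ReplicationDestination would be created on the DR cluster):

```shell
# Continuously replicate a PVC's data to the DR cluster in the
# background; the cron schedule below bounds the recovery point
# objective at roughly ten minutes.
oc apply -f - <<'EOF'
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: database-source
  namespace: mysql
spec:
  sourcePVC: mysql-data
  trigger:
    schedule: "*/10 * * * *"   # replicate every 10 minutes
  rsync:
    copyMethod: Snapshot       # snapshot the PVC before syncing
EOF
```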
M: Hi, I have a question about Konveyor, you know, converting applications, modernizing applications. Do you have any plan to support Cloud Foundry applications? Because, you know, the HCL guys are offering that as a commercial service, but what about the Konveyor project?
H: Yes, their tool is based on Move2Kube; most of their tool is already based on the Konveyor project. What we're trying to do with Konveyor is to address all the different scenarios out there. First of all, answering your question: yes, we do support that migration path. Move2Kube supports that, and HCL is using Move2Kube to power their tooling.
B: So once you run the Move2Kube command line, you get a bunch of YAML files for how to deploy the manifests on Kubernetes, like a Deployment and a Service, something like that. You can also generate a bunch of YAML files for an OpenShift cluster as well. So it's still CLI stuff, but we are also working on making it fancier.
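The CLI flow B describes can be sketched with Move2Kube's plan/transform phases; the source directory and output paths below are placeholders:

```shell
# Point Move2Kube at the source code of the app to migrate;
# this writes an m2k.plan file describing what it detected.
move2kube plan -s ./src

# Answer the interactive questions, then generate the Kubernetes /
# OpenShift deployment YAML (Deployment, Service, and so on)
# into an output project directory.
move2kube transform

# Inspect the generated manifests, then apply them to a cluster:
# oc apply -f myproject/deploy/yamls/
```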
H: And to add some stuff on top of that, we are currently working on integrating the Move2Kube experience on top of Tackle, so you will be able, from your application portfolio, to generate the deployment manifests that Move2Kube generates and have them fully integrated within your Git repository. The whole idea is for a developer to push a button and have everything in the repository for them to start making changes, migrating the application, and deploying it on the target platform from day one.
N: Hello, another question on the Konveyor project: do you support languages other than Java, like Node.js?
H: That's the usual question. For the moment, no, not for the analysis bit of it.

N: Okay.
H: The assessment is language-agnostic, but for the analysis bit of it, for the moment we're focused on Java. We would like to bring in .NET and other languages as well, but we're still waiting to get contributors that can provide that. So what we did with Tackle 2, for example, was to create an add-on-oriented architecture that allows us to easily expand what we're building. What we would like to have now is other add-ons that can analyze different languages, rather than the main Tackle analysis having to do that.
N: [inaudible]

E: So at the moment, as you probably heard this morning, we have x86, and this is on the roadmap. I don't know exactly when this is coming; we can check that. But the answer is: we will allow multiple architectures in a single cluster, and these two architectures won't be the only ones. How far down the line this is, I don't know, but this will happen.
N: And how does it relate to HyperShift?
E: HyperShift is a different concept. With HyperShift, what you will have is one cluster that hosts multiple control planes, and then what you really want is different worker nodes associated with every control plane, working against that cluster, right?
E: You might, and I'm thinking about this now for the first time, be able to combine the two. Why not? Because we're talking about two OpenShift features, right? So OpenShift will support HyperShift on the one hand, and multiple architectures in one single cluster on the other, because we already support multiple architectures.
E: So you will have the two. How related are they? I don't know. Maybe you will combine them, because you're going to have many more ways to architect your topology, to design your cluster, but they're two separate topics at the moment.

N: Okay.
F: Thank you. I'm representing IBM Client Engineering, but, as you might see, I'm a former Red Hatter. As far as I remember, OpenShift 4.1 was released about three years ago. So my question is very simple: any plans for OpenShift 5?