From YouTube: OpenShift Coffee Break: OpenShift Disconnected on AWS
Description
Get your espresso ready to kick off the EMEA OpenShift Coffee Break together with Natale Vinto and Tero Ahonen. We will start this bi-weekly show with an OpenShift Disconnected series, Restricted Network on Public Clouds, sharing experiences and best practices on how to run production-grade OpenShift on AWS with Rafael Cardona (@rcardona_cloud), Hussain Hamid (Fiserv) and Brian Olmeda Morgade (IBM).
A
Yes, we have the starting soon banner.
B
Let me do that. Oh, usually the mic is always open here. If you hear any noise nearby, please switch off your mic, and then you can open it again. Okay, I'm going to switch to the bumper.
B
Welcome, welcome everyone to our first OpenShift Coffee Break show here at OpenShift.tv. My name is Natale Vinto, product marketing manager for OpenShift, and I'm pleased today to present everyone in this session. But let me introduce my co-host first. Jaffar, do you want to start?
A
So hi everyone, welcome indeed to our first edition of the OpenShift Coffee Break, and greetings to everyone connected. Thank you very much, Natale, for hosting this. This will be a bi-weekly event, so we'll meet every two weeks, and I will be alternating with Natale to host this show and have esteemed guests talk about their experiences with OpenShift.
B
So thanks, Jaffar. Before introducing our fantastic guests today on this very interesting topic, which is disconnected OpenShift on public cloud, let me introduce this show a little. The idea behind the show was to have a kind of coffee break like we had in the office, where we would talk about many things while relaxing over a coffee. So I would like to have a coffee together with you.
B
Everyone in the world, together, talking about cool topics and technology. Feel free to ping us in the chat if you have any question for today or for another show. As Jaffar was saying, this is a bi-weekly show, every other Wednesday morning at 10 AM CET; if you are in the UK it's nine, if you are further east it's eleven. It's a one-hour show, and I'm pleased today to introduce our guests.
D
I have reasonably good experience with Kubernetes, about three years deploying and playing with Kubernetes all around Europe. Today I came to talk a little bit about my experience deploying OpenShift in a highly secure environment.
D
There are many interesting features that allow us to unleash the enterprise capabilities of Kubernetes in this fashion, and I'm very happy to jump into those contents in a few minutes. Nice to meet you.
B
Thank you, Raf. So we have two other guests today, because we also want to talk about use cases, and I'm pleased to introduce you, Hamid and Brian. Do you want to start, folks?
C
Yeah, sure. My name is Hamid Hussain. I work for a company called Fiserv; I'm an architect, so my main job with Fiserv is evaluating new tools and frameworks and then onboarding them onto our development teams. My experience with OpenShift is very new: this is the first time I've worked on OpenShift, with Red Hat and IBM.
C
My background is enterprise application development, mainly Java applications. I come from an investment banking IT background; I worked there for almost 20 years, and I moved to Fiserv a couple of years back, into an architect role.
A
Thank you very much, Natale. Sorry to interrupt, but it seems the event is not streaming yet on YouTube. Can you please check?
B
So yeah, while I check this, I think we can continue, because I see it started. So, if you want to start presenting yourself.
B
Okay, cool. So thanks, Hamid, for your presentation. Brian, do you want to go ahead?
Yep, hello.
E
Good morning. My name is Brian Olmeda. I am a customer success manager architect at IBM. I usually help clients with the adoption of OpenShift and of the Cloud Paks. In this case we were working with Hamid and Raf quite closely to install OpenShift disconnected and, on top of that, to have a disconnected installation of the Cloud Pak for Integration, especially one of its products, DataPower. It was a pleasure; thank you for inviting me, and I'm looking forward to it.
B
Thank you, Brian; thank you, Hamid; thank you, Raf. So, today's topic: we invited you because I think you can share with us some very cool examples of how to run production-grade OpenShift clusters on public cloud. What does production grade mean? We understand, and we know, that OpenShift is very easy to install and very easy to use, but how do you make it production ready? And the title was about disconnected. What does disconnected mean, Raf, in a public cloud?
D
Okay. When we talk about security, we already understand that this is probably the biggest issue, and paramount in all enterprise environments. Customers have different needs, and we need to try to adapt our technology to address those concerns.
D
Customers want to be able to host their application infrastructure in an environment that is completely isolated from the outside world, but at the same time they want to enjoy the enterprise capabilities that bring us to a dual world: if we want automation, we, or the customer, need to be able to give up some security constraints.
D
So what we've been working on heavily in the last months, in this integration, is trying to keep using those amazing levels of automation that we now enjoy with enterprise Kubernetes, with OpenShift, while at the same time addressing those concerns.
D
There are two approaches for that: a disconnected installation, and a private installation in a private VPC. They might sound the same, but actually they are not. When we talk about the private installation in a private VPC, it means installing OpenShift with all the capabilities and automation of the installer, but doing it through a proxy. The cluster is still isolated from the outside world, but access to the internet is needed, and that's where the challenge came.
D
So, you know, we want to host this platform, but we cannot use the internet; okay, no internet at all during the installation. The other approach could be disconnected, but disconnected reduces a bit the facility of the installer to do all the magic: create the infrastructure, configure the whole environment. So what we did was to use the bits and bytes of the installer in a completely isolated world, and that's what we did.
D
Yes, it's small, because it reduces a lot the tasks we are supposed to roll out in order to have this type of deployment. Can I explain a little bit what we want to do? I just mentioned that we want to do a private installation in a mirrored environment.
D
It's very important to mention that, because in doing that we are actually telling the installer to behave in a way it is not meant to, but at the same time we are using all the capability and all the intelligence behind the installer.
D
The installer is a very powerful binary, because it reduces the complexity of preparing all the infrastructure around Kubernetes, and at the same time it bootstraps everything and provides you with an already installed cluster. Okay, on to the architecture of today's deployment: we can start by defining what the components are, or the minimum necessary components, for an OpenShift installation.
D
We need, of course, the control plane, composed at the moment, by default, of three master nodes, plus two compute nodes for the compute area. Those are the core of the cluster, but around the cluster we need other components. It might vary depending on whether this is bare metal or a private or public cloud, but in general it's more or less the same across all types of environments, and we always need a load balancer.
D
The newest versions, 4.6 and 4.7, are a little more flexible about the requirements, but we still need load balancers: one that provides traffic distribution to the internal endpoint of the API server; another for the user-facing API calls, also to the API server; and the other is the endpoint that fronts the router.
D
That's the one that manages the traffic directly to the ingress router of the applications. The other very important component is S3: the bucket will be used not only to store application workloads in case that's needed, but in this case the installation will save the bootstrap images there, and it will support all the back and forth of data during the installation. And Route 53.
D
That was also a very interesting issue, because before, we needed to deploy our own DNS; we weren't able to use Route 53 because of resolution constraints, but with the latest installer we can do that. We can manage DNS records in a disconnected environment in the way we desire. Moreover, what will our VPC look like? This is the ideal world: we have three private subnets, in three different availability zones.
B
Let's take this question: why do we have an internet gateway there, if it is disconnected?
D
Very good question; the question that all our customers ask: okay, we want it completely disconnected, so why do we have that? Well, in the end, we need to download the images. We need to mirror the installation images, because the installation takes place by deploying those images and spinning up the containers, so we cannot escape that. One way we avoided this in several use cases was to create the mirror registry in a less protected environment.
D
Then we mirror all the images, we do all the testing, and then we create an image out of that server and move it to another EC2 instance in the privileged environment. So we did an air gap; that's one of the modes.
D
The other mode would be to use VPC peering: one VPC is completely isolated from the outside world, but it is connected to another VPC that serves as a gateway to download those images. That VPC is less protected; the connection to the internet is not completely enabled by the VPC itself, but it contains the bastion and all the mirrored images.
D
But the use case I want to bring up today is none of them. The one I want to bring up today is the one that uses a Direct Connect. Basically, the VPC sits on the customer premises side; it's completely isolated, but instead of being connected to the internet, it is connected directly to the AWS premises, and it gets a certain level of access to the outside world, but that's something the customer can self-determine.
D
I told you that we're going to be using an existing VPC. That VPC has to meet certain requirements; they are not very special requirements, but anyway you should be aware of them, especially the security group that will govern the internal mirror registry. And here you can see, if anyone is interested in doing this in a Direct Connect environment, the only thing they need to do is to come here and change the internet gateway for the Direct Connect gateway; that's the only change.
D
Very good. As I mentioned before, what we wanted to achieve with this type of deployment is to use the best of both worlds.
D
The high level of automation of the installer, but also using our own premises, ready to rock, let's say. But the VPC needs specific settings to be able to hold the installation, because the installer will take over the installation; it will say, okay, I'll do the installation, but when it reaches the VPC it expects that certain settings are done.
D
Those settings are well explained in the documentation, but if you go to the template steps you can really relate to what exactly is needed. In a completely disconnected environment you might replace the installer in the area of provisioning; you can do the provisioning only through a CloudFormation template, but that is not the case here. We have our VPC already up and running; I can show you.
D
The settings, as I mentioned, are easy to compare with this, and in my experience what customers have in place never differs much from what we have here. The CloudFormation template for this particular case helps as a role model to follow, but it is not part of the procedure; it's just to check: okay, my VPC is able to handle the installation. I have another CloudFormation template, and here is how it relates to the installation's security groups.
D
The template I put here is quite complete, because it doesn't only cover the security group of the bastion host, but also the security groups of the whole installation. For this installation we don't need that, because we're going to let the installer do the rest of the magic, but it's important that you have access to see how it's really deployed, in a way similar to how the installer does it. The installer doesn't use out-of-the-box CloudFormation templates, but the interaction and the flow of how the components are deployed is exactly the same.
A
Can I ask you another question, please? Sure. Just to summarize what you mentioned about the internet gateway and the need to have all of the required images and components readily available somewhere: what you mentioned is that there are different options. I'm just doing a quick recap for the people who are not so familiar with OpenShift. So you said there are different ways of getting the Red Hat content to make it available offline.
A
In this case we are using a direct connection, and we are using the oc command line, which allows you to mirror the content for a specific release. So basically you say: I want to install OCP 4.6; please get all the images, all the content, download that for me, and make it available as container images somewhere. And if we wanted, we could have that downloaded separately on a different host, then copy it onto the bastion, for instance, and make it available on an internal registry, for the private VPCs to be completely isolated.
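The flow just described is what `oc adm release mirror` automates. A hedged sketch, not run here since it needs a Red Hat pull secret and network access; the version, registry name and repository are hypothetical:

```shell
# Hypothetical values; substitute your release and internal registry.
OCP_RELEASE=4.6.8-x86_64
LOCAL_REGISTRY=registry.example.internal:5000
LOCAL_REPO=ocp4/openshift4

# Pull the release content from Quay and push it to the internal mirror.
oc adm release mirror \
  -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPO} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPO}:${OCP_RELEASE}

# For a fully air-gapped flow, mirror to disk on a connected host first,
# move the directory across, then upload from the isolated side:
#   oc adm release mirror ... --to-dir=./mirror
#   oc image mirror --from-dir=./mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPO}
```

The command also prints the `imageContentSources` stanza to paste into install-config.yaml, which is how the installer learns to pull from the mirror instead of Quay.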
B
So we can see that topology; pretty good. Thanks, tjf in the chat, for having shared the link to this repo. You can access this repo, it is publicly available, so you can use it for your PoCs and for your use cases. And then I would also like to know from Hamid and Brian what they think about this kind of topology, and whether this kind of use case solved and addressed the requirements they had for production grade.
B
So, Hamid, you said this was an example for Fiserv; how did this solve your use cases?
C
Yeah, so we had a couple of requirements, I would say challenges, when we started this setup. The first one was, due to Fiserv policies, we were not supposed to use all of the AWS services, just the likes of Route 53, the load balancer and S3.
C
We were forced to use the AWS CLI rather than the AWS console, which made things a little bit difficult, because the CLI is more manual, with a lot of syntax checking and all that stuff. But the scripts that Rafa built for this whole thing on GitHub helped us a lot, because they had all the commands that we needed to run quickly, so that helped us make this whole thing quicker.
C
So overall, I mean, the whole disconnected mode that we wanted was already there, designed by Rafa. We just had to use it, make small changes, and then use it for our specific installation.
D
Yeah, definitely. That's a simplified version, as I mentioned before, because I didn't include Direct Connect. But if someone is familiar with CloudFormation and AWS, they know exactly what they need to change, and they can mimic this completely in a Direct Connect environment, if they have one, which is something that came to my mind during the installation.
D
Indeed, we had to use the CLI. When a product is mature enough and enterprise enough, it has to be resilient to those types of constraints. We were able to do it thanks to teamwork: it was very few people working together, and we managed to translate the Terraform and the CloudFormation templates into CLI commands. It's a comprehensive way to do it; let's say it's long, but it gives you complete control over every single component deployed in the environment.
D
That was probably the biggest constraint, but as Hamid mentioned, we were able to do this, and we were able to provision the cluster in those three industry-standard ways: Terraform, the installer with a CloudFormation template, and also the CLI, and the result is the same.
B
Yeah, thanks. And there was a question in the chat on Twitch asking: what does it mean to update a disconnected cluster? I have this topology that looks great for production grade, using disconnected on AWS; if I need to update to OpenShift 4.7, what do I have to do?
D
Well, actually, it's a very nice question. Time is running, so I think I will do the mirroring, because you can see how easy it is, and it's connected to that question. We mirror to any version, any level; let's say 4.6 or 4.5, and our set of operators would have a specific level of release.
D
But when a new release comes out, as a private, air-gapped cluster you are unable to acquire those updates from the outside world, so you need to re-mirror that content, and you execute the upgrade in the same fashion: you just mirror the new version of the release, and of the operators that you want to upgrade.
D
That's quite handy, because I saw a case where a customer wanted to upgrade only a section of their clusters; there were many clusters installed, and they were able to have different mirrors with different versions, and then they managed the updates so that only specific clusters got a specific version of OpenShift. At that moment, do you want to say something?
A
Yeah, Rafael, I was going to offer to explain that if you wanted to start doing things; otherwise you can carry on.
A
Basically, what we all need to understand is that the OpenShift 4 installation and the whole operation of the platform heavily rely on operators, and those operators use container images that are versioned and hosted on the Red Hat registry. Basically, upgrading means deploying the new set of operators, running the containers from the new versions, and to do that, as Rafael explained, you have to get those images from the Red Hat registry, extract them (we provide the tooling for that), host them on a private registry, and then trigger the upgrade, meaning that you will replace the operators that are in place with the new images.
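In practice, a disconnected upgrade is therefore "re-mirror the new release, then point the cluster at the mirrored image". A hedged sketch, not run here since it needs cluster access and a pull secret; version and registry names are hypothetical:

```shell
# Hypothetical target version and internal mirror registry.
NEW_RELEASE=4.7.0-x86_64
LOCAL_REGISTRY=registry.example.internal:5000
LOCAL_REPO=ocp4/openshift4

# 1. Mirror the new release into the internal registry, as for the install.
oc adm release mirror -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:${NEW_RELEASE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPO} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPO}:${NEW_RELEASE}

# 2. Tell the cluster to upgrade to that exact mirrored image.
#    --allow-explicit-upgrade is needed because the image does not come
#    from the connected update graph.
oc adm upgrade --allow-explicit-upgrade \
  --to-image=${LOCAL_REGISTRY}/${LOCAL_REPO}:${NEW_RELEASE}
```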
B
Everyone can, yeah. I would also like to hear from Brian what he thinks about this disconnected-on-public-cloud experience, because, you know, when you think about public cloud you think: oh, it's all on the internet, it's all connected. So, Brian, what do you think about this specific use case?
E
Thank you. I think nowadays customers are turning to this disconnected mode, and I can see OpenShift has improved a lot and we are pointing in this direction. I would say it's the most common, especially in private industries, in banking, and I would say government; they are all going for this disconnected installation. And yeah, as Raf and Hamid are highlighting here, of course you can see a lot of challenges.
E
You can find a lot of challenges, but the good thing, or the maturity of OpenShift, is that you can find workarounds for everything, and that is probably the key here. And, as I say, on top of that, once it is installed there is no difference on OpenShift: you can run the same sort of workloads in disconnected mode or in public mode. That helps a lot in our case, for running the software that we install on top of OpenShift.
B
Yeah, I agree on that, and to link to this discussion, there was a question in the chat saying: hey, do we have to enable Telemetry in the disconnected cluster to have support? So the Telemetry service is only needed for having Insights from Red Hat, like: hey, your cluster is running under pressure; hey, you should fix this. But it's not really needed, and I imagine for a disconnected cluster the Telemetry is not active, unless that specific host is enabled.
B
So this is another interesting topic: in a disconnected cluster, is Telemetry active or disabled? I don't know, Raf, what did you do in this case?
D
That's an excellent question. Out of the box you get a monitoring setup; you can't avoid that, it's going to be there all the time. But in case you want to extend the telemetry...
D
...you can always do that, but in the case of being disconnected it doesn't make much sense to go as far as the type of monitoring that we do in exposed systems. The basic out-of-the-box setup gives you the minimum required to have a good overview of your cluster and of how your resources are being consumed by the cluster, but you are able to reduce even that, in case you don't want to see anything. The out-of-the-box stack, as this setup stands, is reasonable.
B
Thanks, Raf. And Hamid, what about you? I was wondering also, from your side, now that you know what the Telemetry service is: do you think it was good for your use case? What do you think?
B
Okay, so the internal monitoring was enough for you, or do you also use other monitoring tools to track your cluster?
A
Yes, and Natale, maybe also a quick reminder of why the Telemetry service has been added to OpenShift. It's basically a way for Red Hat to improve our support relationship with our customers, but in a proactive manner.
A
So what we ask of the customers is to agree to provide some non-sensitive data about their clusters, about their health, because we collect information from those clusters. Of course, it's mostly anonymous information about the components of the cluster itself, not anything really related to the workloads that run on those clusters.
A
We know if something affects only this customer, or if it's something that we have identified in many other customer installations, and that's something we can proactively start working on, and dispatch the information to our customers ahead of time that something like a new fix has been released. So it's really to help being proactive in the way that we handle the support relationship.
B
Yeah, cool; thanks, Jaffar, for this explanation, for giving some context around the Telemetry. That was a good question; I think it was unix365 who asked it. So if you have any question, please use the chat. We want to use this coffee break to have a coffee break with all of you, sharing experiences and use cases, and as it is a bi-weekly show, if you have any idea, please reach out to us at OpenShift.tv, or we will also share our Twitter handles.
B
So please reach out to us to propose any topic you would like to discuss around OpenShift use cases in this coffee break; let's say a virtual, extended coffee break in the EMEA morning, but we may also have people from APAC. I don't think the US, because it's very early there, but yeah.
D
I will show you guys how easy and straightforward it is to mirror a registry for the installation. Here we have a brand-new AWS EC2 instance with a RHEL image.
B
Yeah, you can use the Zoom chat and I will share it to all our channels.
D
The stacks, okay, everything looks good. We go to our bastion; we created the security group and the VPC. Okay, that's fine, and now we're going to do the most important step of the whole installation, which is to properly mirror our internal registry. I created the script that Hamid mentioned before.
B
Yeah, vim is a good package to install; I think you had it already.
D
Yeah, perfect. We need to find out first what our hostname is.
D
Of course, but this is not what I want, for the following reason: I configured a reliable hostname to, let's say, mimic a real environment.
D
I'm going to create the mirror registry, and for that I'm going to use this magic script. It looks a little bit long, but actually it's not that interesting.
D
What it does is: first, it installs the packages needed to do the whole magic at the operating-system level, but it also installs the extra packages, the ones Jaffar mentioned before: the oc command, the openshift-install binary used later for the installation, and further tools like podman, skopeo, and an HTTP proxy, in case we use a proxy.
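For context, the core of what such a script sets up, a self-hosted mirror registry, can be sketched with podman roughly as below. Hostnames, paths and credentials are hypothetical, and the real script also handles certificates, DNS and firewall details, so treat this only as an outline:

```shell
# Hypothetical layout; assumes podman, httpd-tools (for htpasswd), and a
# TLS cert/key pair for the registry FQDN already present under /opt/registry/certs.
mkdir -p /opt/registry/{auth,certs,data}

# Basic-auth credentials in htpasswd format (placeholder user/password).
htpasswd -bBc /opt/registry/auth/htpasswd admin redhat

# Run the registry container, serving TLS on 5000 and persisting to disk.
podman run -d --name mirror-registry -p 5000:5000 \
  -v /opt/registry/data:/var/lib/registry:z \
  -v /opt/registry/auth:/auth:z \
  -v /opt/registry/certs:/certs:z \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM=Registry \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  docker.io/library/registry:2
```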
D
This is a very complete script, because it brings you from hero to zero with respect to mirroring a registry. Okay.
D
From zero to hero, sorry, thank you; that's very important! Thank you, man. Okay. However, when you mirror the registry, you need a fully qualified domain name. Actually, let me say it very clearly: guys, don't put IPs, because then you will not create the proper certificate, so use the fully qualified domain name of your environment. Okay, but...
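The point about FQDNs matters because the registry's TLS certificate is issued for a name, and clients validate the name they dial. A minimal sketch of generating such a certificate; the hostname is a hypothetical placeholder:

```shell
# Hypothetical registry FQDN; substitute your own. The subjectAltName must
# carry the DNS name clients will pull from, not an IP address.
REG_HOST=registry.example.internal

# Self-signed cert/key pair with the FQDN as both CN and SAN.
openssl req -newkey rsa:4096 -nodes -sha256 \
  -x509 -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=${REG_HOST}" \
  -addext "subjectAltName=DNS:${REG_HOST}"

# Inspect the result: subject and SAN should both carry the FQDN.
openssl x509 -in registry.crt -noout -subject
```

The resulting registry.crt is what later goes into the cluster's additionalTrustBundle so nodes trust the mirror.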
B
Is that an internal hostname in the AWS machine, or is it the reverse?
D
Yeah, but I created a record in a private zone, and that's what I was expecting; but anyway, we can also use this one, it's no problem.
B
Okay, so as a best practice, it's better to create a record on Route 53, in the private zone, so your machine has this hostname, which is better than an IP or whatever, and more mnemonic. So you did that, actually.
D
That's what I'm talking about: you test everything, and then at the last moment something is missing. You see, it never changes for live demos.
B
I mean, the important thing is that it worked on your cluster, right? If it doesn't run in the live demo, we don't care.
D
Okay, I don't know if it's very relevant, in this short time, to go through every single part of this script, but guys, you can go through it, and if you have any questions on how to use it, I'm more than happy to go a little bit deeper on that. Okay.
A
Keep in mind that we are here in case you need us to do any explanations while you are doing things.
D
Very good, don't hesitate.
D
This is quite automated, and I think Hamid can remember that it saved a lot of headaches at a given moment: you just launch the script and you can see it run. Actually, it's very good that this happened, because it shows that this part requires a certain file in the root directory, the pull secret file, and I want to keep that off camera, because it is a secret file.
B
Yeah, so that gives us the opportunity to discuss it again a little. There are about 12 minutes left, so meanwhile I would like to know from Hamid and Brian: what do you think about the experience you had with this disconnected setup on AWS?
C
Yeah, for us this was the first OpenShift install within Fiserv, and it was very important for us to get this right, because there are many other projects which would need a similar kind of setup. So now we've got the AWS setup done.
C
We
might
be
looking
at
hybrid
cloud
with
azure
as
well,
so
yeah.
We
would
need
to
work
on
that
in
the
future
cool,
but.
B
Okay, thanks. So a multi-cloud strategy, always disconnected, as far as I understand.
E
Cool. To complete this hybrid cloud, an on-prem installation is going to happen too, so you will have the complete hybrid picture.
E
And on top of that, yeah, saying it again: this disconnected installation is quite standard, and we are taking advantage of that by following the same procedure to install the software on top of OpenShift; in this case it was the IBM Cloud Paks, and basically they follow the same steps, duplicating this registry and continuing. That means this is an industry standard, I would say.
B
Cool, yeah; yes, it's going to be an industry standard. That's why I wanted to understand a little bit more about this use case: this disconnected-on-AWS setup is kind of a template. Let's say, hey, this works, let's continue doing this, and let's also have a hybrid cloud strategy afterwards, so we can have the resilience, you know, we can have multiple points where we run our workloads. That is cool. And, Hamid, about this disconnected cluster:
B
Can you say it runs many apps, or is it running only some PoC app? How production-ready is it?
C
Yeah, so, I mean, it's not been productionized yet, because we are still in the process of evaluating some of the applications that are being onboarded; IBM and Brian have been involved, so we are trying to onboard some of the IBM products onto this cluster, and we are still in the process of testing our applications.
D
Okay, actually I found out that it was resolving to a different hostname, but actually this is what I was looking for.
B
So, can you increase the font a little bit? I think it's better for visibility. So you are using dig to resolve this hostname, which is the bastion host. The bastion is the host usually used for the installation; it can contain the mirror registry we were talking about before, so it's a kind of helper node. We also call it helper node or jump node; it has many names.
B
For
this
note
I
like
helper
node
because
it
helps-
and
you
are
now
trying
to
resolve
the
the
host
name
for
for
this
node
right,
yeah.
D
I actually executed this command before, and it hadn't happened to me before that it was doing a reverse lookup to this internal name, because I actually deactivated that, so I don't know why it didn't want to help me this time. But this is the most important part: this is my private zone. As you mentioned before, Natale, we like to do this because we have control of exactly how the DNS resolves, and anyway it's resolving properly from this server.
B
So, Raf, the next step for this is an Ansible playbook, right? Some Ansible roles.
D
Actually, that is coming already, but this mirror script is hot off the press; it was updated last night, so I still need to really translate it into a proper playbook. Okay, we carry on; we have the artifacts. Then, interestingly enough, we're going to create the preparation for our internal registry.
B
Hey, Hamid, working with Raf is great, you know; it makes everything look easy. I feel I can install disconnected blind now.
C
Yeah, that's true. The script that Rafa is running now, I have used on my own multiple times to create the registry, and it works wonderfully every time.
B
Cool. So, while you try your script: for the people who are watching the streams, I also shared in the chat the OpenShift try-it link. If you want to try OpenShift on AWS, or public cloud, or any other installation you want to just try out, please go to that link and start trying OpenShift, and you can use this script with this repo.
B
If
you,
if
you
plan,
if
you
want
to
try
disconnected
installation,
it's
an
alpha
that
can
make
your
life
easier
in
this
disconnected
scenario.
D
We actually do the mirroring for all the red hat and community operators. So we don't want to do that now, because it's not necessary for the installation and also because it's quite big: we're talking about more than 120 gigabytes of beautiful operators, placed not only by red hat but also by the community. But I just want to let you know that, in case you also want to mirror the operators of the community, you can do that here in the script.
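The catalog mirroring mentioned above can be sketched with `oc adm catalog mirror`; the index tags and registry hostname here are placeholders, and as noted, the full set of red hat plus community operators runs to well over 100 GB:

```shell
# Mirror the Red Hat operator catalog into the internal registry
# (index tag and registry host are placeholders).
oc adm catalog mirror \
  registry.redhat.io/redhat/redhat-operator-index:v4.6 \
  bastion.ocp4.example.internal:5000/olm \
  -a ./pull-secret.json

# Optionally mirror the community operator catalog as well.
oc adm catalog mirror \
  registry.redhat.io/redhat/community-operator-index:v4.6 \
  bastion.ocp4.example.internal:5000/olm \
  -a ./pull-secret.json
```

Each run also generates an `ImageContentSourcePolicy` manifest to apply on the cluster so pulls are redirected to the mirror.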
D
I remember when I was really... oh man, sorry, sorry. You know, the spinning up with fiserv was really amazing. Those guys are very professional, and ibm has also been doing their stuff with the integration of the cloud pak. It really was a super interesting project, because we could see the power of a cloud pak serving the customer, running on openshift, doing what they needed. It was really, really good. It's a great team, and the collaboration is still growing; it just started.
B
Yeah, that's wonderful, that's wonderful! The collaboration is key. So thank you for sharing this use case with us today on openshift tv; having this collaboration is key. As final thoughts, because we are getting to the end of the show, I would like to do the closing. raf, if you want to show something, I don't know if we have the time.
D
The
the
I
had
to
have
hopes
that
we,
it
will
finish
and
we
will
finish
and
then
we
could
see.
We
can
query
them
the
registry,
but
if
we
don't
have
time
yeah,
I
you
know
what
I'm
gonna
do.
I'm
gonna,
I'm
gonna
run
this.
I'm
gonna
load
the
video,
so
you
can
see
the
whole
thing
in
this
repo
later.
B
Thank you, thank you very much. The topology you shared, that is in the repo, is very interesting, it's very cool. And also this collaboration you were talking about with hamid and brian, that is fantastic; this is really the spirit of how to build great things. Thanks for sharing this use case today, it was very interesting. If you want to reach out to raf, to hamid, to brian:
B
You can ping us. And also, folks, if you want to share your twitter handles in the chat... we are always available on twitter at openshifttv, and if you want to hear more about this, please reach out to us. We will keep you in touch with hamid, brian and raf. Thank you very much for this session. raf, you can stop sharing the screen now, so we end this session.
B
I
only
would
like
to
thank
you
all,
and
I
would
like
to
thank
chris
short
specifically
for
this
big
big
help
on
setting
up
this
first
emea
time
zone
overshift
tv
show
thank
you
chris,
for
having
set
it
up
all
for
us.
So
thank
you
very
much.
I
appreciate
it.
I
appreciate
that.
D
Do
we
have
one
more
minute
to
show
the
how
the
mirror
finish?
Oh.
A
Yeah, so just before we close up: this is a show that we are hosting for the emea time zone, but I linked a calendar for the main openshift tv twitch shows, which also have a lot of great content.
A
It's just maybe a bit later in the day for us, but yeah, feel free to check it, and the good thing is that it's also recorded, so you can.
B
It was live; this was totally live code, live hacking. Thank you, thank you very much. So I want to thank hamid, brian and raf for this session today. Thanks for having joined this first episode of our openshift coffee break show. Just a quick reminder, as jafar was saying: on openshifttv, the next show would be at 9, tea time, the level up hour, today. And we will come back
B
In
two
weeks
on
the
17th
and
the
topic
will
be
inner
and
outer
loop
for
java
developers
on
openshift.
So
if
you
like
to
to
watch
all
the
shows
go
into
the
calendar,
I
think
jafar
shared
it
in
the
chat
in
the
wild
folks.
Thank
you
for
attending
it
have
another
coffee.
It
was
a
great
coffee
break
and
talk
to
you
soon.