From YouTube: OpenShift Coffee Break: A look back at the history
Description
Back to the future? Join us with Alessandro Arrichiello as we walk through the story of OpenShift 3 and how it evolved from the beginning! Get your espresso ready for the EMEA OpenShift Coffee Break together with Natale Vinto, Tero Ahonen, and Jaafar Chraibi.
A
Good morning, good morning, everyone. Welcome back to the OpenShift TV Coffee Break, our bi-weekly show, every other Wednesday at 10 a.m.
B
Yeah, sure. Hi everyone, and thanks again for joining us. I'm Jaafar Chraibi and I work as a technical marketing manager for OpenShift. Prior to that I spent several years working with OpenShift 3, and even OpenShift 2 before it, so this is going to bring back a lot of memories. Tero, thanks.
C
Good morning, everyone. I'm working in our specialist team, and likewise I've been working a lot with OpenShift 3. I kind of hoped this day would never come, that I'd have to talk about OpenShift 3 again, but at the time it was a good product, and yeah, there are real differences between what it was and what OpenShift is now. Have a nice show.
D
Okay, so as Tero, Natale, Jaafar and the others mentioned, we will talk about OpenShift 3. We decided to talk about OpenShift 3 again because sometimes it's good to look back and also look at the differences between this product, which is now almost five or six years old, and what we have today. Matteo and I started working on it around 2015 or 2016, for a customer here in Milan, a broadband company. Looking back also gives you the chance to see what kind of features were actually there five or six years ago compared to the features we have today. We will see: I prepared a small demo, a little environment running OpenShift 3 on my laptop.
D
And, as I said, looking back five or six years, Matteo and I were already part of Red Hat, like maybe some of the participants of this call, but in a different role: we were part of Global Professional Services as cloud consultants, and I had joined fairly recently at that point, five or six years ago.
D
And actually, as I said, we usually worked with customers using RHEL on Linux boxes. There were also some early projects around OpenStack, for example, but OpenShift was pretty new there. There were some installations of version 2, but version 3 was really recent at that time. So we started working, as I said, with this broadband company.
D
They wanted to start streaming services, because instead of the fixed line, the cable, they wanted to offer this new service on the TV streaming side, and they needed a platform that could enable their developers to put products and applications into production very fast. That's why they started looking at containers: they first experimented with Docker and containers at that time, and then ended up starting to test...
D
...OpenShift version 3 as well, and Matteo and I helped this customer install it. If I'm not wrong, it was version 3.0 or 3.1 that had just been released, or rather we did the small upgrade from version 3.0 to 3.1. It was fun, because we had to look after the installation part ourselves. We had some experience with Docker at that time: five or six years ago the smarter sysadmins had some experience with containers, but we had no clue about Kubernetes or this product, you know.
E
So with this new version we made a big change, and if you remember, it was really challenging to understand this whole new perspective: how this technology brings everything together in order to put services online.
A
Going back into the history, I remember OpenShift 2 was using the cartridge concept, so it was its own implementation of a platform as a service, right? Then Kubernetes came along, and Matteo and Alessandro were among the first ones going into production with it. It's a very cool story; I look forward to hearing it from you.
B
Yeah, one of our biggest OpenShift customers in France actually started on OpenShift 2 and they kept it for a while, even after OpenShift 3 was released. Although it was a completely different product, it was already delivering the value of this notion of PaaS and these things.
D
Let me share my screen, so we come to the demo. Actually, I don't want to give you any details on this web console yet; first of all, let's start with the installation. As I said, I prepared a small installation of OpenShift 3 on my laptop. Actually, maybe you lost my video; let me enable it. Okay. I prepared the OpenShift 3 installation on my laptop, basically on two virtual machines.
D
I
created
this
two
virtual
machine
with
background,
so
it
would
be
easy
to
replicate
and
to
let's
say,
re-enable
or
reschedule
or
restart
the
service
on
this
needed,
but
just
to
give
you
an
example
for
install
I'm,
I
started
with
the
two
virtual
machine
running
route.
Actually,
I
started
also
with
a
pretty
recent
rail
version.
Rail
7.6
actually-
and
I
have
to
say
that
the
installation
went
really
smooth.
D
Apart from some errors that I'll tell you about in a moment; but as I said, I just went through the documentation.
D
You are seeing the documentation for version 3.1, but in 3.0 it was the same, and I followed all the instructions reported there: for example, registering the virtual machines to the Red Hat portal, attaching the right pool ID, enabling the right channels, and of course installing all the prerequisites for running the Ansible playbook. I don't know if there is someone in the audience who only knows OpenShift 4, but in version 3 we had Ansible handling the installation part, because we started from standard RHEL and then needed something like an automation for installing all the packages.
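(For reference, the host preparation Alessandro describes looked roughly like this on a RHEL 7 box in the 3.x era; the account, pool ID and repository names below are illustrative placeholders reconstructed from the old docs, not values taken from the episode.)

```bash
# Sketch of the pre-install steps from the OCP 3.x documentation (placeholders, from memory)
subscription-manager register --username=<rh-account>
subscription-manager attach --pool=<pool-id>
subscription-manager repos --enable=rhel-7-server-rpms \
                           --enable=rhel-7-server-extras-rpms \
                           --enable=rhel-7-server-ose-3.1-rpms

# Base prerequisites, then the package that pulls in the openshift-ansible playbooks
yum install -y wget git net-tools bind-utils iptables-services bridge-utils
yum install -y atomic-openshift-utils
```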
D
Yeah, in fact, before that we had a lot of experience with Puppet; I myself had a lot of experience with Puppet but was pretty new to Ansible, so it was really confusing for me to debug this kind of installer. You know, when you hit an error, and you work as a consultant, the customer expects you to solve the issue you encounter, so it was really hard to get into the product at first.
D
But in the end we did it. As I said, all the prerequisites had to actually be in place before installing the product, and as you can see in the final part we also installed Docker. I remember pretty well that later we hit this kind of warning, because Docker kept updating, and if you used a previous version of OpenShift you could of course hit some issues.
E
For instance, looking at the changes that we made to this product since then, we no longer depend on the Docker engine, right? Yeah, yeah.
B
Yeah, and I think it would be interesting at some point to tell the story about how things evolved. Docker at the time was a pretty good breakthrough: wrapping the container technology in a very user-friendly experience was great. But then everybody started getting enthusiastic about it, Docker started to push releases almost weekly or bi-weekly, I don't remember at what pace, and the vendors who were relying on Docker wanted to have some more enterprise-grade supportability. So if we can maybe talk about that, and the way CRI-O was created, that might be interesting.
D
Yeah, as I said, looking back at that time, I've switched to my terminal. I'm actually now on the master, but let's start from the initial part. As I said, I had two virtual machines running: one is the OpenShift master and the other is the worker. If I jump onto the master and look at the installed services, I can see that there is a Docker daemon running, just as the documentation says.
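(What that check looks like on a 3.x master; the unit names below are from memory of that era and shifted slightly across releases, so treat them as a sketch.)

```bash
systemctl status docker                    # container runtime the node talks to
systemctl status atomic-openshift-master   # API server/controllers on the master
systemctl status atomic-openshift-node     # kubelet-equivalent service, on every node
```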
D
At
that
time
it
was.
It
was
a
really
a
nice
technology
for
it
and
also
created
a
lot
of
let's
say
hype
around
the
containers
world.
But
I
remember
pretty
well
that
one
of
the
common
issues
that
we
had
with
the
with
the
the
first
customer
is
that
maybe
the
docker
storage
got
full.
Maybe
the
docker
storage
fails
and
then
you
have
from
from
aside
the.
E
At the time there weren't things like garbage collection or image pruning, so the platform didn't include that feature yet. So the problem that Alessandro described happened, and that's why the product has evolved into what it is right now.
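(Image pruning did arrive later in the 3.x line; a typical invocation looked like this, with the thresholds chosen here purely as an example.)

```bash
# Prune old images from the integrated registry (run without --confirm first for a dry run)
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm
```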
D
And
let
me
say
also
having
two
demons
working
in
parallel:
each
other.
It
could
be
really
a
mess,
because
at
the
end
you
will
have-
and
at
the
time
there
was
the
atomic
overshift
node
the
services
running
for
openshift
on
the
on
the
worker
nodes.
And
then
you
have
the
docker
demon
running
and
if
the
docker
demon
fail
failed
for,
for
some
reason,.
D
...the atomic-openshift-node service kept running and kept trying to contact the Docker daemon, and it was really difficult to debug, because when we started we knew Docker and how the Docker daemon works pretty well, but the Kubernetes stuff was new. So we also had to understand how the services communicate internally and where the issues were. But not to go into too much detail about Docker...
D
...I want to show you the Ansible hosts file, because as I said the installation part ultimately requires you to fill in and edit an Ansible inventory, providing all the details that you need for your installation. As you see in my terminal, we define two groups, one for masters and one for nodes, and then define some Ansible variables: the SSH user to use for installing all the software, whether Ansible should use sudo, and the type of deployment, because at the time we also released it as open source; what is OKD today was named Origin back then, the Origin project. And finally some configuration on top of that.
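(A minimal inventory along the lines Alessandro describes; the group and variable names are reconstructed from memory of the 3.0/3.1 docs and are illustrative, not copied from his file.)

```bash
cat > /etc/ansible/hosts <<'EOF'
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
ansible_sudo=true                      # later releases renamed this to ansible_become
deployment_type=openshift-enterprise   # "origin" for the open source Origin/OKD build

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com
EOF
```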
E
I
mean
I
mean
this.
This
actually
solder
was
really
really
flexible,
and
this
is
this
was,
in
my
opinion,
one
of
the
best
features
that
we
had
at
the
time.
However,
during
the
life
cycle
of
our
cluster,
this
approach
might
lead
in
what
can
be
called
a
configuration
rate,
and
if
I
look
at
the
comparison
with
what
we
are
doing
now
with
version
four,
we
completely
engineered
the
installation
part,
and
this
will
allow
us
to
have
the
a
better
handling
of
the
world
life
cycle
cluster.
E
So
maybe
what
you're
seeing
here,
what
is
showing
us
and
what
we
were
able
to
do
with
version
3
was
a
really
extendable
and
flexible
in
terms
of
how
we
can
deploy
the
architecture,
but
still
the
problem
with
the
lifestyle
management
of
the
class.
That
is
something
that
we,
I
think
we
are
addressing
better
now
sorry
for
having
you.
C
Okay, about the inventory file that was mentioned: I remember that whenever there was a new release, once we got an inventory file that worked with that release, it was like a crown jewel. We never lost it; we saved it and sent it to everyone, so that we had a known-working inventory file. That was always nice, because there was always something that changed in the inventory file between the different versions, and there was a lot of stuff that was not documented but had variables for it.
C
So it was its own research and development exercise to go through the code and check: can I modify this value, or is there some variable for this? So it was good, but it had its problems.
B
Yeah, right, the inventory file became a skill set of its own. You had to become a master of the inventory, and then your colleagues would ask you, as the, you know, the master guru.
E
I think that is a major change. It is a way forward and a real enhancement in how to handle the complexity that comes with the growth of the technologies included.
E
Nowadays,
the
openshift
handles
many
many
themes
in
confront
of
what
was
able
to
do
in
version
three,
especially
in
version
3.0,
so
this
complexity
has
to
be
handled
and
the
operators
and
the
way
we
installed
this
today
is
a
great
announcement
on
that.
A
Yeah, sometimes we hear some noise in the background, like, I don't know...
D
Those virtual machines are pretty heavy, so that is the fan of my laptop. But anyway, just to give you an example: at that time, as I said, we started experimenting with this product with this customer here in Italy, and we also started editing, for example, the configuration files directly in the OpenShift configuration directory on the master node.
D
With
that
there
is
this
config
file
called
the
master
config.jambo
that
contains
all
the
stuff
needed
to
the
openshift
master
services.
To
start
and
again,
when
you
configure
something
at
the
time
enhanceable
I
mean
in
the
ansible
inventory,
then
it
reflected
in
this
file,
but
the
the
the
fun
part
is-
and
here
is
here's
the
fun
part
when
you
edit,
something
because,
for
example,
I
added
this
matrix
public
url.
D
That
was
the
first
version
or
ocular
magic
service,
and
I
added
it
manually,
then,
if
you
for
let's
say
if
you
forgot
to
update
your
ansible
file
as
well,
just
overwrite
the
the
configuration
file
for
you,
so
you
lose
all
the
edit
you
do.
You
did
manually,
you
know,
and
at
that
time
it
was
pretty
frequent
to
work
directly
on
the
machines
with
the
configuration
file
also
for
troubleshooting
and,
as
I
said,
for
testing.
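(The kind of manual edit being described; the file path and key below are from memory of the 3.x layout, and the same value also had a matching Ansible inventory variable that had to be kept in sync.)

```bash
# On 3.0/3.1 the file lived under /etc/openshift/master/, later under /etc/origin/master/
grep -A3 assetConfig /etc/origin/master/master-config.yaml
#   assetConfig:
#     ...
#     metricsPublicURL: https://hawkular-metrics.apps.example.com/hawkular/metrics
```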
D
On
the
other
hand,
another
issue
that
I
hit-
and
maybe
here
I
can
show
you
a
diff
between
the
ansible
host
that
we
that
I
prepared
for
for
the
open
shift.
D
You
will
see
soon
that
there
is
a
change
in
the
in
the
variables
and
at
that
time,
and
also
this
time
I
didn't
read
the
release
notes.
I
didn't
read
the
the
changelog
between
the
two
version,
so
I
ended
up
with
I
started
working
with
openshift
3.0
and
then
realized
that
some
of
the
shiny
feature
of
openshift,
like
the
the
terminal,
the
logs,
the
metrics,
were
not
present
at
that
time
in
3.0.
D
Unfortunately
I
didn't
read
the
changelog,
and
so
I
didn't
realize
that
we
changed
the
value
for
this
deployment
type,
and
so
the
installation
keeps
failing
and
it
keeps
failing
because,
as
you
can
see,
we
we
first
had
this
these
two
variables,
product
type
and
deployment
type
in
version
3.0
and
then
change
it
to
3.1
in
just
a
deployment
type
with
a
joint
of
openshift
and
enterprise.
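(Roughly the diff being shown between the two inventories; the exact 3.0-era variable values are from memory, so take them as illustrative.)

```bash
diff hosts-3.0 hosts-3.1
# -product_type=openshift
# -deployment_type=enterprise
# +deployment_type=openshift-enterprise
```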
D
You
know
and
this
of
course
it
was
actually
a
lot
of
hours
of
work
for
me
to
understand
what
is
the
issue
because,
as
I
said,
I
didn't
read
the
instruction.
This
is
my
fault.
I
didn't
read
the
documentation,
but
it's
pretty
common,
also
with
with
this
ansible
playbook,
to
also
forget
about
some
stuff
forget
about
to
read
or
miss
some
pieces
in
the
documentation
and
then.
A
Yeah, that's why, if you recall, Tero started it and we, as a tiger team, started the STC project just to collect all the hints, like Alessandro says: no, not this variable, you have to use that variable instead. So we built up a kind of validation and prep script, a simple script to prepare the right Ansible file. I can actually share it in the chat, because it is still working for OpenShift 4, but if you go back into its history it works for OpenShift 3 as well.
C
Yeah, but back to the Ansible part: it is powerful, and if there was a bug you could just change the Ansible files, the playbooks, on the host, or if you needed to debug you could run Ansible to modify the hosts or do a docker pull on all the hosts to test that Docker was working. So it was really powerful.
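(Ad-hoc runs of the kind Tero mentions; the image name is just an example.)

```bash
# Pull a test image on every node in the inventory to verify Docker is working
ansible nodes -i /etc/ansible/hosts -m command \
        -a "docker pull registry.access.redhat.com/rhel7"
```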
C
But
still
there
that
alessandro
said
that
if
you
change
something
in
the
master,
you
need
to
change
same
same
indentable
host.
That
is
totally
same
now,
but
in
different
layer.
Now
you
do
heat
ups,
so
you
don't
go
there
to
change
your
deployments.
You
have
to
go
to
the
kit
to
the
source
of
truth
and
modify
it
there.
So
it's
the
same
thing,
but
just
a
couple
of
layers
up.
B
Yeah, and I remember one of the big things we had was: if I wanted to install something post-installation, how can I trigger just that and not rerun the whole Ansible playbook, because I forgot to put in some variable for the metrics or whatever? I know we started dissecting the installation playbooks into more granular pieces; that's something that happened a bit later, so you had playbooks you could run just to configure logging, or a playbook to configure just the metrics, and so on. But you still had to run them, and you had to run them in a specific order. Now, with operators, you don't care, because the operators are going to continuously look for any changes you make in the Git repo, as Tero said, and they are reconciled in whatever order they happen; there's no more "do this playbook first".
E
Absolutely, absolutely. At the time, idempotency of the installer playbook was a real concern, and I think even from an engineering perspective it was really hard for our engineering team to maintain idempotency for all this kind of automation. Now, as you said, with operators it is much easier, and of course with CoreOS, which is designed to be immutable as well.
B
A much more streamlined experience, exactly.
E
And it encapsulates the logic that enables you to update a complex piece of software made of many components. In the past, if you did an update of Elasticsearch, just as an example, you maybe changed the back-end system for that piece, and that would trigger some stuff that had to be performed manually. Now everything is in the operator and is handled by it.
A
Yeah, yeah, I agree. As Alessandro also mentioned, with CoreOS the risk of compromising the operating system is really reduced. Since we are sharing stories here, I want to share one of mine. I was installing OpenShift 3, and what happened is that the customer had locked /etc/resolv.conf; that file was locked because their security policy was to lock it, but then the installation wasn't working, and it took two hours of debugging.
A
We
understood
that
was
the
problem
and
also
ansible
sometimes
was
failing,
because
the
operating
system
was
modified
with
core
os.
This
risk
of
changes,
not
not
not
expected
changes,
are
reduced
or
totally
eliminated.
So
a
big
advantage
is
not
only
the
operator
approach,
but
also
the
operating
system.
So
railcar
os
is
a
big
help
on
reducing
the
the
surface
of
attack,
but
also
the
risk
of
a
changes.
Our
error
on
the
state.
D
I
completely
agree,
and
if
you
remember
there
was
a
atomic
host
at
that
time.
That's
trying
to
to
that
to
do
that
job
you
know,
and
also
on
the
operator
side.
We
don't.
We
didn't
had
have
the
the
operator,
but
we
tried
with
the
ansible
playbook
bundle.
For
example,
there
was
this
ansible
containers
running
some
playbook
inside
the
container
inside
the
openshift.
So
there
was
a
lot
of
stuff
that
and
that
of
course
evolved
in
in
what
we
are
seeing
now
in
openshift4.
C
And
operators
are
supported
actually
after
3.9,
so
you
don't
use
operators,
but
adding
to
the
another
battle
story
is
that
at
the
time
there
were
some
companies
that
were
actually
using
automation
and
it
was
fun
to
try
to
install
openshift
and
then
you
had
a
short
stack
or
puppet
have
a
race
condition
because
they
tried
to
change
the
settings
back.
The
answer
to
change,
so
it
was
constant
change
and
it
just
never
worked
and
then.
C
Yeah, their automation would automatically disable IP forwarding or modify /etc/hosts, so everything looked okay, and then you tell me you've been battling for five days to get it running.
E
If you think again about what our engineering team can do now with that technology: it's about putting in place an entire continuous integration flow to test each new release and its components, while at the time it wasn't possible, let me say it that way, to try every single specific configuration that a customer would put in place. Right now one of the benefits is making sure that what comes out of our engineering, of our product and technology structure, is something that is really tested end to end.
C
Can you run the version check? What is the Kubernetes underneath?
D
Yeah, we have OpenShift version 3.1.1 and Kubernetes version 1.1 at that time, but...
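(The check Tero asks for; on a 3.1 cluster the output looked roughly like this, with the exact build suffixes here being illustrative rather than taken from the demo.)

```bash
oc version
# oc v3.1.1.6
# kubernetes v1.1.0-origin-...
```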
D
You had a point, and you're anticipating a thing that I wanted to show, because I managed to find the documentation through the Web Archive, pointing to 2016. So this was the Getting Started page of Kubernetes, and this was the documentation at the time. It's very simple and basic. Here, for example, I also managed to find the API definitions that we had at the time: there is Pod, Service, Endpoint, Node, Event, LimitRange, Secret.
E
Now, ingress in Kubernetes: at the time it was already in place in OpenShift with the concept of routes, and also the DeploymentConfig, which is now in some way represented by the Deployment API in Kubernetes; neither existed in upstream Kubernetes at the time.
B
And if we think about it, we are speaking about the history of OpenShift, but if we speak about the history of Kubernetes as well, I think just those two things are among the main contributions I would say Red Hat made at the time from OpenShift back to upstream Kubernetes, because we were committed not only to making a great product, but also to contributing to making the upstream projects better, and everybody is now using Deployments.
C
A good example, since it was in OpenShift before it was in Kubernetes: Red Hat has contributed a lot to role-based access control in Kubernetes, and at the time, I can't remember the exact versions, there was role-based access control in Kubernetes and a different role-based access control in OpenShift.
D
Yeah, but another cool thing is that OpenShift was born with developers in mind. Apart from the multi-tenancy, as you can see I've just logged in with my user, alex, defined in the htpasswd file. The first thing it lets you do is create a project, which is something like an extension of the namespace, and if I create my mysql project, for example, and hit Create, it presents me the classic interface where you have this full list of templates to start from. So you don't have to, let's say, build your Docker container, push it somewhere, then instruct Kubernetes to pull the container down and then finally run it.
D
You
have
a
full
list
of
templates
at
that
time
and
we
are
still
in
version
3.1.
Just
for
saying
you
know,
and
so
we
had
the.
Of
course
we
started
also
our
adventure
with
jenkins
and
the
jenkins
integration
that
time
there
is
no
fancy
interface
integrated
in
openshift.
You
can
deploy,
for
example,
hankins,
and
there
is
a
bunch
of
pre-default
stuff
you
can
hit
create
and
let
openshift
spawn
a
new
container
for
you,
for
example.
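(Roughly the same flow from the CLI instead of the web console; the template name is the one shipped in later 3.x releases and may have differed slightly in 3.1.)

```bash
oc login https://master.example.com:8443 -u alex
oc new-project mysql
oc new-app jenkins-ephemeral   # instantiate the predefined Jenkins template
oc status                      # see what was created
```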
D
So
it
was
really
straightforward,
easy
to
consume
containers
also
to
consume
containers
for
someone
that
did
didn't
know
so
well.
What
is
what
was
a
container?
You
know,
because
at
that
time,
if
we
talk
again
of
this
first
customer
that
we
had
in
milan
adopting
openshift
me,
mateo
and
other
colleagues
federigo,
for
example,
had
to
work
a
lot
to
like
a
devops.
You
know
you.
We
have
to
listen
to
the
complaints
coming
from
the
developers
listening
to
the
complaints
coming.
E
So I think I'll try to add one important point here: since day zero we had a workflow that allowed developers to step forward using this technology. Among the things we brought with this solution is the concept of builds, allowing OpenShift to build your own application starting just from the source code.
D
Yeah, and as you can see, just by clicking in the web interface you get your containers up and running, and there is also a route created for you. So, starting from this concept of templates, you have all the stuff needed to get your containers up and running and also to access them from outside the cluster, and so again, just clicking the route...
D
It
takes
me
to
the
jenkins
interface,
for
example.
So
it's
really
really
powerful
for
me,
this
kind
of
interface
and
this
kind
of
also
user
experience
at
the
time,
and
we,
if
we
look
at
the
left
sidebar,
we
had
a
ready
the
builds
concept,
so
you
can
create
your
build
and
define,
for
example,
also
a
docker
file
to
build
on
your
on
your
environment.
We
have
the
concept
of,
of
course,
of
pods.
We
can
explore
the
running
parts
and
starting
from
russian
3.1.
D
This
is
why
I
know
I
didn't
started
with
version
3.0,
at
least
for
showing
something
on
the
web
interface.
We
have
a
very
nice
recap
of
the
running
container
of
the
running
pod,
the
ip
address.
This
was
really
fun
at
that
time,
to
explain
to
the
customer
and
to
the
users
of
openshift
the
overlay
network.
The
fact
that
at
that
time
we
had,
we
had
also
docker
container
exposing
an
ip
locally
to
the
nodes.
D
Then
we
had
this
overlay
network
on
top
of
the
openshift
cluster,
and
then
we
had
this
ingress
controller
flowing
the
traffic
inside
your
cluster
through
the
to
the
pod,
and
I
think
that
mateo
could
could
explain
some
fun
fact,
because
you
know
handling
this
kind
of
architecture
in
a
in
an
old
view
in
an
old
architecture
type
of
the
customer.
When
we
had,
we
usually
had
the
web
interface
the
the
front
end.
Then
you
have
the
backhand,
and
then
you
have
the
db.
D
The
classical
three
three
tire
layers,
including
this
open
shift
that
actually
could
be
a
front
end,
but
could
also
expose
a
backhand
or
could
also
expose
a
database.
It
could
be
really
difficult.
Also
in
terms
of
networking.
You
know
so.
E
Basically,
we
were
working
as
that
customer
we
were
referring
to
and
they
they
needed
to
reach
a
backend
database
from
a
pod
that
was
placed
in
the
openshift
platform
that
we
put
in
place
for
them,
and
the
fact
is
that
at
the
time
there
weren't
the
concept
of
address
controller,
so
you
weren't
able
to
decide
within
kubernetes
how
to
handle
the
traffic
outside
kubernetes
so
and
outside
overships.
Of
course,
at
the
time,
the
only
thing
that
you
can
do
was.
E
Yeah
you're
right
you're
right
that
was
possible,
but
what
happens
is
that
you
need
to
make
sure
that
the
pod
that
that
node,
that
runs
your
container,
has
a
routing
table
able
to
reach
that
back
end.
So,
basically,
what
happens
is
that
your
thoughts
consume
the
rooting
table
of
the
underlying
name.
So
we
did
a
trick.
We
did
a
hack
with
traffic
control
and
source-based
routing
with
tp
tables
again,
but
the
customer.
E
I
think
that
the
customer
still
didn't
understand
how
that
worked,
because
we
put
the
hands
in
part
of
the
operating
system
that
were
quite
tricky.
Actually
so
alexandria
is
referring
to
that
because
he
blamed
me
because
for
what
we
did
because
it
was,
but
no
one
there
was
able
to
understand
how
and
why,
at
the
time.
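(A sketch of the node-level hack Matteo describes, with made-up addresses; this is exactly the kind of thing the later egress router and egress IP features made unnecessary.)

```bash
# Route traffic from the pod SDN range to the external DB network via a dedicated table
ip route add 10.50.0.0/24 via 192.168.10.1 dev eth1 table 100
ip rule  add from 10.1.0.0/16 lookup 100          # 10.1.0.0/16 stands in for the pod CIDR
iptables -t nat -A POSTROUTING -s 10.1.0.0/16 -d 10.50.0.0/24 -j MASQUERADE
```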
B
Yeah, and thinking about networking, I think it's also one of the areas where many evolutions happened over time. As you said, the first thing is to deploy everything on the platform, but then the customers started to think: okay, how can I replicate my old architecture of front end, middleware and back end, and have them all segregated?
B
So
we
started
to
say:
oh,
you
can
put
your
front
end
in
the
name
space
and
you
can
put
your
back
end
in
a
separate
namespace
and
then
you
can
only
create
connections
between
front
end
and
back
end,
but
the
customers
started
asking
oh,
but
I
want
to
control
in
more
granular
way
the
flows.
So
I
only
want
traffic
that
goes
to
tcp,
whatever
parts
to
be
allowed
and
everything
else
denied
and
things
like
network
policies
started
to
happen
in
both
kubernetes
upstream
and
openshift
networking.
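(What that "only this TCP port, deny the rest" request looks like today as a NetworkPolicy; the namespace and label names are invented for the example.)

```bash
oc apply -n backend -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db            # only the DB pods are covered by this policy
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tier: frontend  # only traffic from the front-end namespace...
    ports:
    - protocol: TCP
      port: 5432          # ...and only on this TCP port is allowed
EOF
```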
B
I think this conversation was a bit tricky in the beginning, because everybody wanted to replicate what they knew, and they didn't want to trust the SDN, they wanted to control the SDN. So I think it was a funny way of seeing how even network admins started to rely more on software-defined networking. But at the time it was not an easy conversation, because... yeah.
A
Also,
the
sdn
sorry
also
the
sdn
adapted
to
that
to.
If
we
think
about
multus
today
yeah
it
allows
working
on
multiple
interface.
That
was
the
first
as
you
just
mentioned,
and
everyone
want
to
work
in
that
model.
So
two.
D
You know, at that time we had no way other than placing, for example, the static routes on the various hosts, and we ended up creating another Ansible playbook that we usually ran when we deployed a new node in the cluster, just for updating all this stuff and keeping it recurring. At that time we had also started working on Satellite 6, and we also tried to place some of these rules inside Satellite through Puppet, for example.
D
So
there
was
also
a
mix
of
stuff,
because
again
you
the
only
one,
the
only
way
is
to
add
it
and
to
work
with
the
underlying
operating
system
and
for
managing
the
underlying
operating
system.
You
had
a
lot
of
tools
and
you
have
you
you
could
edit
it
without
any
limitation.
D
You
know
this
was
of
course
this
was
pretty
pretty
nice
and
advanced
for,
of
course,
advanced
user,
but
for
a
general
consumption,
let's
say
the
as
as
we
saw
as
as
we
said
previously,
the
chorus
introduction
and
the
operator
are
more
an
easier
method
to
do
it
actually.
C
Mentioned
like
what
legacy
infra
had
firewalls
ip
based
firewall
link
between
services.
Now
we
have
worked
at
in
the
kubernetes
environment
now.
The
next
phase
is
what
telcos
are
asking.
They
need
to
have
same
features
that
they
run
on
bare
metal
and
telecoil
environments.
They
need
to
have
an
sri
iov
interface
is
a
cpu
new
appearance
and
we
are
all
again
evolving
that
into
the
gubernators,
and
it
is
just
like
second,
and
maybe
there
will
be
third
and
fourth
and
fifth
stage
of
matching
the
coupon.
B
Yeah, and this reminds me of something else regarding the traffic. I remember that the router, the HAProxy router that we included, which didn't exist in Kubernetes, was already a major feature, because it basically just worked. You had your wildcard; okay, not everybody was happy with using a wildcard, but as soon as they understood that they didn't have to create a hundred entries for every new pod or new service they deployed, that it just works, they said...
B
Yeah, okay, yeah. So this was already like a big step, but then you start to speak with some customers, like in the banking industry, where they have some strict regulations about traffic, etc.
B
And if you remember, router sharding is a feature that came a bit later on, and we said: yeah, okay, so there's a use case that maybe we can address, and we started deploying dedicated routers for dedicated networks. So yeah, I think that was also one of the good things about being enterprise-ready: you take those requirements from customers and then you make them happen.
E
That is true, that is true. And as Alessandro showed us, this feature that you are mentioning, together with the ability to build an application starting from a Git repository, is what enables an enterprise without a lot of deep knowledge about an emerging technology to use it from day zero.
C
And one good thing to add about the ingress, which was already there back then: you had TLS termination support. Because, as you see in that API spec, there were no Secrets, no ConfigMaps, so it was really hard to actually add a certificate to the workload; you actually had to build it into the container, which is not nice. But with OpenShift you could do the TLS termination on the ingress, and you have...
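(TLS edge termination on a route as it looks with the current CLI helper; in the very early 3.x releases you would write the equivalent tls: stanza into the route definition by hand.)

```bash
oc create route edge frontend-tls \
    --service=frontend \
    --cert=tls.crt --key=tls.key --ca-cert=ca.crt \
    --hostname=www.example.com
```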
B
Yeah, Tero, that is a very, very interesting point. So do you remember how we used to handle rotation of certificates?
B
A playbook, an Ansible playbook, exactly. And one of those things: you go to your cluster and nothing works. Oh, then it's that part of the year where my certificates have expired. So we then started doing things like an Ansible playbook that tells you when your certificates are going to expire. But now we have the operator, where you just change your certificate and the operator redeploys it instantly and reconfigures the router and maybe even other routes.
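(The later-3.x helpers being alluded to; the playbook paths are from memory of openshift-ansible and moved around between releases, so check the version you actually have.)

```bash
# Report which cluster certificates expire and when
ansible-playbook -i /etc/ansible/hosts \
    playbooks/openshift-checks/certificate_expiry/easy-all.yaml

# Re-issue and redeploy the cluster certificates
ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
```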
B
So
I
remember
it
was
something
that
was
pretty
painful
because
you
had
to
to
pay
attention
very
closely
to
it
and
you
had
to
create
your
own
scripts
to
to
handle
that
and
we
listened
to
our
customers
pain
and
now
we
have
the
operator
that
does
that
and
that
can
handle
the
rotation.
So
I
I
believe
yeah
doing,
tls
and
and
and
certificates
was
really
a
good
feature
at
the
time,
but
I
think
now
with
operators.
I
prefer
the
way
it
works
today.
D
I'm just having fun with the interface. Actually, as I said, at that time we had all these details in this web interface.
D
Otherwise
you
have
to
to
grab
them
from
from
the
terminal
in
other
in
other
way,
and
I
just
spawned
my
sequel
container,
for
example,
and
at
that
time
we
also
had
the
persistent
volumes
and
the
persistent
volume
claim
inside
the
the
concept
of
open
shift
and
the
underlying
kubernetes.
D
And,
as
I
said,
I
show
I
chose
to
start
with
the
version
3.1
because
also
had
the
hipster
service
running
that
show
you
the
real-time
consumption
about
the
cpu
and
ram
and
very
nice
and
shiny,
shiny
graphs
on
the
on
the
web
interface
as
well
as
you
can
access
to
the
logs.
So
the
the
live
logs
from
the
from
the
containers
and
then
finally,
to
a
very
nice
terminal
where
you,
actually,
you
can
look
into
the
the
containers,
are
running
comments.
D
It
actually
simulates
the
oc
lsh
for
connecting
to
the
for
attaching
a
shell
to
the
the
same
namespace
of
the
underlying
containers.
So
it's
it
was
really
fun,
also,
but
also
complex,
as
I
said,
showing
this
stuff
to
the
customer.
But
at
least
the
the
web
interface
at
that
time
was
really
really.
D
And
for
business
consumption
and.
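(The CLI equivalents of those console tabs, for reference; these commands exist from 3.x onward.)

```bash
oc logs -f <pod>            # live container logs
oc rsh <pod>                # interactive shell inside the pod
oc exec <pod> -- <command>  # run a single command in the pod
```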
B
Yeah, and you know it was a bit tricky if you wanted to get custom metrics or build custom graphs and these things. So yeah, it was there for a long time, and at some point we decided to switch to a Prometheus- and Grafana-based metrics and monitoring stack, which I believe was also a good shift. Because if you think about the way Red Hat does things: we had OpenShift 2, and there was a breakthrough technology that nobody was using yet, which was Kubernetes, and we said...
B
Oh,
I
think
it's
promising,
let's
switch,
even
if
we
have
to
rewrite
everything.
Let's
use
that
and
let's
use
docker
and
then
we
had
our
own
metrics
components.
We
were
contributing
to
hokular
and
and
such
things,
but
at
some
point
we
saw
that
there
was
promise
in
using
prometheus
and
we
decided
to
embrace
prometheus
and
contribute
to
to
it,
probably
because
also
of
the
acquisition
of
core
os
we
had
who
had
the
great
experience
with
prometheus
already
and
who
were
a
major
contributor
to
to
the
project.
B
But
what
I
like
about
openshift
story-
orchestra,
history
is,
is
we
don't
say
this?
Is
how
we
build
it
and
we're
going
to
stick
with
this?
If
there's
something
better
because
customers
say
it's
better
or
the
community
says
it's
better
openshift
evolved
to
to
change
its
stacks
and
and
come
up
with
with
something
more
adapted
to
to
the
use
cases.
A
Yeah
and
look
we
can
use
this.
This
is
a
very
nice
sentence
to
close
this
episode
of
today,
we're
going
into
the
end,
but
also
alexander,
you,
you
are
showing
the
topology
the
the
first
approach.
D
But finally, before closing, I also want to show you the Web Archive page for the OpenShift Origin open source project at that time, and the fact that we also distributed an all-in-one VM, much like the Minikube or Minishift VM that you may have used, or again like CRC (CodeReady Containers) today.
A
Yeah, so this was what we provided before OKD: okay, Origin was kind of the ancestor of OKD and of what OpenShift is.
A
Okay, folks, we have to close. We have seen today, with Alessandro and Matteo, the history of OpenShift: from version 2 with cartridges and gears, to version 3 with Kubernetes and Ansible for the installation, and now we are going to OpenShift 4 with RHEL CoreOS and operators. And as Jaafar was saying, we keep improving the software from the community input and from the customer input. So what's next? We don't know, but we would like to hear your feedback. Let's close this session.
A
Listen
to
you
cannot
stop
sharing
your
screen.
We
have
some
little
reminder
to
do
today
at
openshifttv.
You
have
level
up
hour.
The
the
session
is
certified
container
pros,
then
ask
an
admin.
There
is
another
session
splat
and
very
problem
detector,
and
also
we
have
our
current
schedule
for
openshift
tv.
We
come
back
next
in
two
weeks,
so
next
one
is
to
june
10
am
together
with
jafar
siamak.
We
will
talk
about
tecton
in
action,
so
techton
live
demos
about
pipelines.
A
This
is
our
next
show.
So
look
looking
forward
for
to
our
next
show.
I
would
really
like
to
thank
alessandro
mateo
for
joining
us
today.
It
was
their
idea
was
great
and
I
already
tweeted
the
screenshot.
It
was
very
cool.
So
thank
you
folks,
for
joining
for
having
joined
us
and
yeah
talk
to
you
soon
on
openshifttv
ciao.