From YouTube: OpenShift Commons En Vivo - KubeInit with Maria Bracho, Scott McCarty, and Carlos Camacho (Red Hat)
Description
OpenShift En Vivo
Episode 3
KubeInit: OpenStack ecosystem best practices to improve the way OKD/OpenShift is deployed
Guest Speakers: Maria Bracho, Scott McCarty, Carlos Camacho
B
Yes, of course. Well, thank you very much for inviting me. My name is Carlos Camacho. I actually work as a software developer and, in principle, I don't work on OpenShift, nor do I work on Kubernetes either, but I am part of the OpenStack development community; it is one of the products that we also work on. And the goal of today's talk was to show a little of this to people who are more accustomed to the development of OpenStack and of Kubernetes.
C
B
People tend to get very used to a single Kubernetes distribution but, for example, take Submariner: we have people developing it, but there are also people, for example, from Rancher, and the continuous integration environment right now is based only on OpenShift. The idea is to extend it, to be able to increase the scope of the continuous integration environment so that it works with other Kubernetes distributions. But yes.
A
Talk a little more about these things. When I had already talked with Carlos about this, I think he had more or less the same question that you were asking yourself, that is: how is it related to, for example, the OpenShift installer? But you will see that the piece that actually drives the deployment is not a replacement; it is not another way to deploy that is completely separate from the installer, it simply calls the installer.
B
In principle, KubeInit, the only thing it is, is a collection of Ansible content: a collection that provides playbooks and roles to deploy and configure multiple Kubernetes distributions, because each distribution has its particular installation method. For example, we use the OpenShift (origin/OKD) installer and we have to deploy a bootstrap node; from there we deploy the different nodes, but there are different steps prior to that deployment, for example:
B
We have to set up the DNS service, or we have to set up an HAProxy to be able to balance the load to our origin cluster. That is what, in principle, KubeInit made easier: all those steps prior to executing the installer. Right now it also works for Canonical; the Canonical distribution's installer is different, but it doesn't matter.
B
Right now there are four Kubernetes distributions integrated, which are the most used from my point of view; it is something that is quite opinionated. We can deploy vanilla Kubernetes; it's not deploying OpenShift as such. No, the project is not based on a single distribution. It's like this: I don't deploy OpenShift, but I deploy OKD; I also deploy Rancher's Kubernetes engine and the Canonical distribution. So we are at four distributions right now, and any of them could be deployed.
A
B
In principle, Kubernetes deployments in general tend to be complex, and the learning curve for all the people who want to start using this technology is usually a bit traumatic at first. There is a lot of documentation, the software development cycle is very short, there are many changes, very fast, and it is difficult for a person to keep up with both the code and the documentation.
C
B
The idea is that the project generates live documentation. Later I'm going to show you that, for example, the project documentation is generated from the code itself. So if a person makes a change and that change already has documentation within the code, the documentation will always be up to date. So the idea is that people do not need documentation; that would be ideal, but in case you want to read it, it should be as up-to-date as possible.
B
Okay, about why I've been involved in the project: in principle I was working on other projects that have a more research-oriented scope and, well, I had the need to define a deterministic mechanism that would allow me to deploy the same Kubernetes cluster, in this particular case origin/OKD, maybe 150 or 200 times. Imagine the work of having to install the same cluster 200 times by hand following the documentation: it does not scale, it is something quite complex to do, especially because of the time it would take.
B
This is done in a way that lets us repeat the experiment a certain number of times, to guarantee that the confidence interval of our results is within a certain range. This came up around June 2018, for a research project: a framework to run chaos engineering tests in Kubernetes clusters, which has been under review for many months now, and I hope, if there is luck, it can be published this year.
A
B
The idea of the presentation is that, in real time, I am going to launch a demo deployment of a Kubernetes cluster and, when the deployment is finished, well, we'll see the terminal again and I'll show the results. María, feel free to jump in, please. Okay, as you can see, in principle it's only two steps away: clone the KubeInit repository, which is on GitHub, and run a playbook. If you notice, the playbook is super simple, and we are going to run the playbook as root.
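The two steps described above can be sketched roughly as follows. This is a minimal sketch, not the project's exact invocation: the playbook and inventory paths are illustrative placeholders, and the real ones live in the KubeInit repository.

```shell
# Sketch of the two-step flow described in the talk (paths are illustrative).
# Step 1: clone the KubeInit repository from GitHub.
CLONE_CMD="git clone https://github.com/Kubeinit/kubeinit.git"

# Step 2: run the deployment playbook as root, passing the inventory
# (the description of the machines to be created on the hypervisor).
DEPLOY_CMD="ansible-playbook --user root -i ./kubeinit/inventory ./kubeinit/playbook.yml"

echo "$CLONE_CMD"
echo "$DEPLOY_CMD"
```

Switching distribution is, per the talk, just a matter of pointing at a different playbook.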
B
We pass a file with the inventory, which is simply the description of the machines that we are going to deploy on a KVM hypervisor, and the playbook that we are going to run, which, if you notice, can be different depending on the distribution: if you want to deploy vanilla Kubernetes instead of OKD, it would be the k8s one, and so on with the other distributions. Then I'm going to share my screen quickly and I'm going to run the playbook.
B
Perfect, you can see the screen? Perfect. So, in principle, this is the server where we are going to execute the playbook, and the KubeInit repository is simply cloned there. Here you can see the basic structure of the repository, and what we are going to execute is the ansible-playbook command, as root: we pass the inventory and the playbook that we are going to execute, which depends on the distribution we are going to deploy. Because, in principle, that is the only thing we have to do.
A
B
Exactly, exactly. The idea is that the user, or the person who wants to start working with this, does not have the need to make any changes. In principle, the inventory is defined in such a way that no changes have to be made. Once you have a server with the necessary resources to deploy it, you can deploy it, see that it works and, if you need to make any changes, do so afterward, but not before. The idea is that, with this command, you can have the environment deployed. Yes.
B
Okay, as you noticed, I execute the command that runs the playbook directly from the server. What happens? This is a machine that I know has all the dependencies needed to be able to run the playbook. If, for example, you have a laptop and you haven't installed the dependencies, you won't be able to run the playbook. So, and these are things that are in the documentation, you can launch the playbook from a container: a container that has all the necessary dependencies to be able to run it.
B
The playbook, basically. The idea is that a person arrives with a laptop, for example, or with a Mac, and does not have the necessary dependencies: it is simply a matter of downloading the container image, mounting a folder where the repository is, and launching the playbook. In this case, if you want to see how it is done, yes, of course; but the idea is that you do not depend on the dependencies of the computer or laptop that you are using to launch the playbook.
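The container-based flow can be sketched like this. The image name, tag, and mount paths are assumptions for illustration only; the real image reference is in the KubeInit documentation.

```shell
# Sketch: run the playbook from a container so the laptop itself needs no
# Ansible dependencies. Image name and mount paths are illustrative.
IMAGE="quay.io/kubeinit/kubeinit:latest"   # hypothetical image reference

# Mount the cloned repository into the container and run the playbook there.
RUN_CMD="podman run --rm -it -v \$(pwd)/kubeinit:/kubeinit:z $IMAGE \
  ansible-playbook --user root -i /kubeinit/inventory /kubeinit/playbook.yml"

echo "$RUN_CMD"
```

The same command works with docker in place of podman; the point is that all dependencies travel with the image.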
C
B
Of course. In this case, right now, I'm just using the latest version of the OKD installer to be able to deploy, something that could be changed very, very easily. It could be extended because the variable that defines which version is going to be deployed is inside the OKD role, so in this case, if you don't configure it in the playbook, you are going to deploy whatever default it has.
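As described, the pinned version lives in the role's default variables. A minimal sketch of what such a defaults file could look like; the path, variable name, and value are illustrative, not the exact ones in the repository:

```yaml
# roles/kubeinit_okd/defaults/main.yml (illustrative path and variable name)
# If the playbook does not override this, whatever default is here gets deployed.
# Pin to a concrete release tag to make deployments deterministic.
kubeinit_okd_release: "latest"
```

Overriding it from a playbook or with `-e kubeinit_okd_release=...` on the command line would then select a different version without touching the role.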
B
Of course, each of the four distributions that it supports right now can be deployed out of the box. Later, if, without making any other changes, you want to deploy different versions, you have to change the file that has the default variables, where the version to be installed is defined. What happens is that all the nodes download the container images at the beginning, from wherever it is defined in the installer.
B
Okay, perfect. Then, going a little further back, I'm going to talk about the motivation for KubeInit's architecture and about where the idea was taken from, more or less; and then I am going to show you how the networks are configured and how the nodes of each distribution are configured and connected. Well, in principle, the idea of this is to reuse pieces of the OpenStack ecosystem. There are five main repositories; if you look at the code, you will see that there are many.
B
There are several similarities. First we have TripleO, in OpenStack, from which the architecture is taken; for example, how the documentation is generated is exactly the same: the project documentation is generated with Sphinx, which is the default documentation system of OpenStack. And if you see the structure of how the roles are organized and how they are distributed within the collection, you will see that the idea is very similar. For example:
B
We install the DNS service, but that DNS service is independent of the distribution that we are going to install. So the idea is: one role that deploys the DNS service, another role that deploys the HAProxy service, another role that deploys, for example, a particular Kubernetes distribution and, for example, another role that deploys the NFS service.
B
If we need it, regardless of the distribution that we are going to deploy. From TripleO we also take the support for the molecule tests; if you look, the structure is quite similar. Within KubeInit there are unit tests, functional tests, molecule tests, integration tests; there are many tests and, somehow, they live together. The structure is very similar to what we do in TripleO today.
A
These are the good practices of the OpenStack ecosystem, and they are born from projects that are dedicated to making CI gates out of simple scripts and so on. You obviously need to deploy, check that everything is, let's say, coming up okay, and then be able to continue with the next steps. When you have a big pipeline, let's say, if you get stuck in one step at the beginning, everything falls in sequence; so this whole way of doing tests, automating the verification and redeploying the deployment makes a lot of sense to me.
B
The idea is that we have invested a lot of time in designing how the jobs are going to be executed and how to run the unit tests. Why not reuse all that value that we have already generated for something new? That is basically the main idea of this. Well, here I am going to show you a little bit of the architecture of how KubeInit is organized. Notice that each vertical bar is a different role.
B
We have a role called bind, another role for the HAProxy, another role called nfs, another one called apache (a web server, used to serve files during the deployment), and so on with all the services that we want to deploy. Because these services are not attached to, they are not coupled with, any Kubernetes distribution.
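The service-per-role layout described above could be composed in a playbook roughly like this; the role names are illustrative, not necessarily the exact ones in the collection:

```yaml
# Illustrative playbook: each service is an independent, reusable role,
# and the distribution role only orchestrates them.
- hosts: hypervisor
  become: true
  roles:
    - role: kubeinit_bind      # DNS for the cluster domain
    - role: kubeinit_haproxy   # load balancer in front of the masters
    - role: kubeinit_nfs       # optional shared storage
    - role: kubeinit_apache    # serves bootstrap artifacts over HTTP
    - role: kubeinit_okd       # the distribution role, which calls the rest
```

Because no service role depends on a distribution role, swapping `kubeinit_okd` for another distribution reuses everything else unchanged.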
B
These services are independent, so it does not make sense for us to have a monolithic architecture if what we want is to be able to reuse the components, and also to guarantee that the test coverage of each one of these components is greater. That is to say, it is not the same to integrate bind into each Kubernetes distribution separately as it is to have a single bind module and reuse it in all the different Kubernetes distributions: you take advantage of much more code, and it is much easier to maintain. And then we have another layer: even if it is drawn as a horizontal bar that says "Kubernetes distribution", it is one more role. The only thing is that what this role does is call other roles that install or deploy additional services, plus the pieces of the distribution itself. So, well, we have one more role.
B
C
For example, what customers ask me the most is: can I install it on my own machine? I just want to use OKD, more or less, and that's hard, because they don't want to use CRC. Sometimes they just want to use exactly what we're going to install, on a server, more or less, if they want to practice managing it and all that, and I think that's fine. I really find this interesting as well.
B
Of course. It has a table that shows the configurations I can deploy, from 32 gigabytes of RAM to 250 GB of RAM. You can deploy whatever you want: if you have few resources, adjust the distribution parameters to something that works and, as you have more resources, you can then deploy more nodes, or they can have more disks or more RAM, or whatever you want. Regarding the hypervisor host, we support Fedora, Debian and Ubuntu. What happens?
B
Okay, from there, the external services we have are the Apache web server, the HAProxy, the DNS server, and the roles that deploy them. As for the documentation, as I mentioned before, the documentation is generated using Sphinx, which is the default documentation system in all OpenStack projects.
B
In principle, KubeInit's documentation is rebuilt automatically every time we merge any commit in the repository, and this is integrated into GitHub Actions. Although, in principle, as this is a project that does not have resources assigned per se, I don't have anywhere where I can run heavyweight continuous integration tests, for example.
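A minimal sketch of such a docs job in GitHub Actions; the file name, trigger, and build command are assumptions for illustration, not the project's actual workflow:

```yaml
# .github/workflows/docs.yml (illustrative)
name: docs
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install sphinx
      # -W turns Sphinx warnings into errors, so stale docs fail the build.
      - run: sphinx-build -W docs/ docs/_build/html
```

Since the documentation is generated from the code itself, a successful build on every merge keeps the published docs in sync with the repository.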
B
One of the very good things you have to know is that, for all the projects that are open source, it can be used for free, which allows the quality of the code to be quite a lot better. As for how the roles are executed: well, you already saw that executing a role is quite, quite simple.
B
What happens if you now want to create a new one? Imagine that, for example, Scott or Maria arrives and says: "hey Carlos, I have been working with some tool and I see that it is not integrated; I want to add a new role." And I tell you: nothing happens; it is in the documentation. You can generate a new role automatically using an Ansible playbook that scaffolds it.
B
It is based on GitHub Actions; all the CI jobs are executed there. There are linters that verify the style of the code; there are unit tests that verify the Python modules inside the Ansible collection; there are end-to-end tests, which I'm going to talk about later; and, well, we have the molecule tests, and I check that the documentation can be generated automatically. So these jobs report every time someone sends a pull request; it shows up in the status of the pull request.
B
And, in principle, if everything passes, well, the code can be merged without any problem. So, a summary of the CI: it is based on GitHub Actions; it is executed depending on whether there is a push or a pull request; in principle, results are obtained in about 24 minutes, depending on the job that is going to be executed; and all the code is covered.
B
Okay, now, the end-to-end tests I was talking about: on that side we can deploy OKD, vanilla Kubernetes, Rancher, or the distribution of Canonical. That runs on GitHub Actions runners. What happens is that a virtual machine is created where you can run the job, but the resources of that virtual machine's footprint are limited; that is to say, I cannot launch a cluster of eight machines in that virtual machine, because it simply does not have the resources, and the GitHub job would fail.
B
And GitHub recommends not to attach your own infrastructure as self-hosted runners to those public repositories, because they cannot guarantee that a person won't come and execute malicious code in your internal infrastructure. So that is something that is like that; it cannot be changed until further notice. What happens? I had the need to be able to run tests on my own hardware, with a slightly larger footprint, to be able to deploy Kubernetes distributions. So the integration and end-to-end tests are not based on the GitHub-hosted runners; they are based on a local GitLab instance.
B
Okay: automatically, an agent running in GitLab notices that there is a tag, which was added by a person with admin privileges, runs the job and returns the results. In this way we guarantee that we are not going to have anyone executing malicious code within our local infrastructure and, in the same way, we give visibility to those jobs so that they report in the public repository.
B
There are two scripts: one is the Python integration with the GitHub API, which simply executes the script that launches the job. There is a check that checks the tags every 10 minutes and, if the runner finds the tag, it will configure the job, run the job and write the results back.
B
The validations, the validations are incredibly important to help users who are starting to use these technologies because, for example, I've had people ask me, "Hey, does the playbook work?" and it didn't work for them because, for example, they didn't have enough disk space, or they did not have enough RAM in the hypervisor to be able to deploy the cluster, and then it failed. That is something that is partially resolved with the validations: a series of validations are made before the deployment.
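Such pre-flight validations can be expressed as plain Ansible assertions. A sketch, assuming illustrative thresholds and mount points (the project's actual checks may differ):

```yaml
# Illustrative pre-flight checks, run before touching the hypervisor.
- name: Validate hypervisor resources before deploying
  hosts: hypervisor
  tasks:
    - name: Fail early if there is not enough RAM
      ansible.builtin.assert:
        that: ansible_memtotal_mb >= 32768   # example threshold: 32 GB
        fail_msg: "Not enough RAM on the hypervisor to deploy the cluster"

    - name: Fail early if there is not enough free disk space
      ansible.builtin.assert:
        that: item.size_available > 200 * 1024 * 1024 * 1024   # example: 200 GB
        fail_msg: "Not enough free disk space in {{ item.mount }}"
      loop: "{{ ansible_mounts | selectattr('mount', 'equalto', '/var') | list }}"
```

Failing here, with a clear message, is much friendlier than a cryptic installer failure an hour into the deployment.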
B
Okay, now I am going to tell you a little (it is quite brief) about the network architecture of the basic deployment: how the DNS service is configured and how the HAProxy service is configured, which are the basic services. As you can see here, we have four roles. And note: this explanation is based on OKD, because it is what I usually deploy.
B
In principle, we create a virtual network whose CIDR is 10.0.0.0/24, and there we are going to place all our machines. In this case we have four types of machines: we have the bootstrap node; the services node, a special machine where we are going to install Apache, bind and the HAProxy; and then we have the workers and the master nodes. Remember that, in principle, a Kubernetes cluster in general can have a single server, a single machine, a single master node, or it can have three, or it can have five, depending on whether we want high availability or not, because of the etcd cluster that will be working there. And then the workers can number from 0 to n; in principle, if we do not deploy any worker, the master nodes will also process the workloads. And the network that we create has DHCP by default.
B
The bootstrap node comes up first and then, with the ignition files, the other different nodes, the three masters and the workers, are bootstrapped, using the procedures that are in the official OKD documentation. So, if you look and read the playbook, you will be able to see that it follows the official documentation; it is simply that, automated with Ansible.
B
Of
course,
that
is
the
idea
of
having
something
that
maybe
it
can
be
useful
for
70%
of
the
people
who
want
to
try
it
and
can
have
quick
value,
for
example,
the
continuous
integration
system
of
the
submariner
guys,
and
they
don't
have
an
ax
in
the
naval
masters.
They
don't
care,
they
have
a
nou
master
and
two
workers.
Who
is
an
architect,
that
is
to
say
it
is
a
deployment
it
is
supported
by
this.
B
It is simply a single master and 2 workers, because what they do in their continuous integration tests is to pull down a worker node to see how the network behaves, restore it, and see that the messages reach the other worker, that the links between the two clusters are established again. That is what they want to test; so, for example, this architecture, being very simple, serves what they do.
A
B
The thing is that the inventory is adjusted according to what each user needs. And, well, another thing that I wanted to tell you: by default, the networks are created with a KubeInit-specific prefix; there is a management network, and then the network that you want to create, because in principle you can deploy, if you have enough footprint, multiple Kubernetes distributions on the same machine.
B
The services node is the machine where we have deployed the DNS service, the HAProxy, the web server and, well, the NFS, also by default. This machine allows us to access the resources of our Kubernetes clusters. For example, if we query the cluster nodes, let's see: you can see that we have deployed three masters and one worker, and that the masters have been working for fifteen minutes, because they're the first thing we deploy, and the new worker as well.
B
You can see that, in principle, the cluster nodes were working and, if we even check, there is, for example, KubeVirt, which is one of the services that I deploy by default, because I have been doing quite a lot of tests with it and, well, it doesn't seem too heavy to have it as a default service. We will be able to see that, in principle, we have our pods working.
B
Sure, let's see: it is deployed by default. It would be necessary to configure the different flavors of the virtual machines that can be deployed, and to verify that the cluster we have just deployed has enough capacity to be able to host the virtual machines. But that sizing job is something you can do once you have everything working and you see that there are no errors, that you can deploy a virtual machine with a very small image,
that you can connect to it, that you see that it works. Once you have that working you can say: good, now I am going to adjust the sizing to my needs; but it has already worked out of the box. That is the idea of this. Well, ready, María, if you can share the screen again. We finished that part of the presentation; we shouldn't have much more left. In principle it should be set for about 40, 45 minutes and I had more or less 30 planned, so let's extend a little more, and that's it.
B
Another thing from the feedback that I find curious, and I have received a lot of feedback on this project, is: what happens if I want to access the resources that I have just deployed in the cluster from outside the hypervisor? Because, of course, I deploy the OKD cluster within a NAT-ed network, and from outside the cluster I will not be able to access it, because it is a NAT network.
B
There is no rule that allows me to access the traffic of the management network from outside, and the solution is very simple: we have a single machine, the services node, that should be accessible from outside. From outside it should not be possible to access the workers or the masters directly, only the resources exposed by the HAProxy that runs inside the services node. What happens? Because, in this case, this is also in the documentation: it is simply creating an additional bridge.
B
Exposing the services node is documented because it is something that many people asked me once they had deployed: how do I get access to the resources of the cluster I deployed? And, well, to achieve this, our DNS server has to be configured in a slightly more special way, such as having external zones and internal zones.
B
What happens by default? The DNS of all those machines is handled by libvirt, and that is something that we can configure in libvirt: we simply say, look, when you get a request for the domain of our cluster, redirect that request to the services node. So, in this way, we have libvirt's DNS working for us, redirecting all the DNS requests once they are configured. And notice that this method of configuring the DNS service is valid for any Kubernetes distribution.
B
The external network has nothing to do with the services that we exposed within the cluster and, if we want to externalize this DNS service (I don't know if that is the right word), because I have my own DNS server with authoritative status for the zone that I am going to deploy in the cluster, you simply adjust it to point to the other side, and that's it. The HAProxy endpoint is an A record within the DNS.
B
Here is an example of where the records of our zone point. If the request comes from, for example, the internal network, all the requests, for example the two name servers, will point to 10.0.0.100; but if it is a DNS request that comes through the external interface, the external view will resolve it as 10.19.41.159, which is the IP address that we are going to assign to the external side of that services node, to be able to reach it from outside.
B
This is ultra-basic, but it's something quite well tested; it's standard in BIND configuration: definitions depending on where the DNS requests come from. Here, this is an example of how the internal zone is configured for the zone of our cluster, for example in the case of OKD, which is what I usually test. If you notice, there is a wildcard in the third rule: an asterisk, dot, apps, then the domain.
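An illustrative internal zone file for an OKD-style cluster, with the wildcard record mentioned above; names and the serial are examples:

```conf
$TTL 3600
@           IN SOA  ns1.mycluster.example.com. admin.example.com. (1 1D 1H 1W 3H)
            IN NS   ns1.mycluster.example.com.
ns1         IN A    10.0.0.100
api         IN A    10.0.0.100          ; HAProxy in front of the masters
*.apps      IN A    10.0.0.100          ; wildcard: every exposed application route
```

The external zone file would be identical except that the A records point to the externally reachable address.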
B
The DNS service will automatically point all those requests to our services, and we will be able to create applications that use fully qualified domain names in our cluster because, automatically and by default, bind does it; otherwise we would have to register each application that we create with its IP.
A
B
In principle, this is something bind supports natively, and we don't have to do anything else. Configuring the HAProxy: well, the configuration of the HAProxy is also quite basic. As you can see, we only have four endpoints: two for the ingress traffic, another one for the management traffic, and one for the web server.
B
In the same way, the configuration of this is standard HAProxy, and it is configured in this way. What happens is that the configuration can get quite hairy if we have many machines, but it is automated. Then, in this way, we can have an initial configuration that is consistent and correct and, from there, well, we could extend it ourselves.
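An illustrative haproxy.cfg fragment for two of the endpoints of an OKD-style cluster; the ports follow OpenShift's documented load-balancer layout, and the backend addresses are examples:

```conf
# Illustrative HAProxy fragment: API traffic to the masters,
# ingress HTTP traffic to the workers. Machine-config (22623)
# and HTTPS (443) frontends would follow the same pattern.
frontend api
    bind *:6443
    default_backend masters-api
backend masters-api
    balance roundrobin
    server master-0 10.0.0.10:6443 check
    server master-1 10.0.0.11:6443 check
    server master-2 10.0.0.12:6443 check

frontend ingress-http
    bind *:80
    default_backend workers-http
backend workers-http
    server worker-0 10.0.0.20:80 check
```

Generating this file from the inventory is what keeps it consistent as the number of machines grows.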
B
The architecture of the project supports it, and even the way to create additional roles for whatever we want to integrate is automated. It is not here in the presentation right now, but I have a role that, in fact, I put together in a few hours to deploy Submariner as an additional service, and it is integrated to be used within the Submariner CI. But in principle this would be a fully functional service: if a person wanted to deploy two clusters and connect them, they should not have to do anything else.
B
And, well, in principle, this has been the bulk of the presentation. I have told you a little about why and how the initial prototype was made and how it works and, well, the idea of the project is to be as agile as possible. So that everyone realizes: there are many code smells within the code, but it is functional code. That is to say, you will be able to see within the code that there is some role that would benefit from a rewrite.
B
Surprisingly, enough people have written to me that they are using it, and well. I already had the next steps planned for how to add other distributions, but that plan is quite old by now and I have already connected three more distributions, so, well, that shows that there has been some progress.
B
Something that I find super interesting is the deployment of clusters that are disconnected from the network, in which both the workers and the masters do not have to download all the container images from the internet. In my particular use case, I have a server with 64 gigabytes of RAM connected by ADSL and, what happens, deploying the cluster using only the ADSL bandwidth takes three and a half hours, because you have to download a lot of content. So the idea of this would be:
A
B
For example, the idea would be to create a role that allows deploying a registry within the services node, or wherever you want to deploy it. For example, for the continuous integration system of Submariner, we have to have a local registry, because we have to generate the images of the containers that we are going to use within the cluster.
B
So it is something that is not there right now but, I suppose, in the next few weeks, if I want KubeInit to be usable in the continuous integration environment of Submariner, it has to be there. That is something that is not yet there, but we are going to be working on it; it is in progress.
A
B
Another thing is that, for example, there are a lot of bugs; there is a lot of code and the coverage, for example, of the tests is quite lax. Sometimes it would be quite interesting to add more content within the integration tests, for example functional tests or unit tests. The problem is that it takes a lot of time.
B
Right now, most of the weight of the project's tests is based on the end-to-end tests: if I manage to deploy the cluster and I see that all the services are up, for me it works. What happens, and this is what the molecule tests are for, is that each service can be brought up and tested independently, because that single-service load can be executed within 15 minutes. So it doesn't make sense for me to invest an hour when I can get feedback in 5 minutes using those tests; but well, little by little.
B
If you need anything, if you've liked the presentation and you can help with something as simple as trying the tool and giving feedback; and also, if you want to be in contact or receive updates, give the project a little star. If I need to send out some information or something, well, I use the list of people who are there, and little more. Thank you.
B
Since I started by myself, and today there are 54 of us, 45 people who have sent a pull request, and many stars, and comments on my blog and on the documentation that I am writing about all of it. People try it, find what is wrong, they ask me, I answer them. They say: "this is not really efficient", or "okay, hey, there is something that we are not doing well". They are usually very specific things. So, well, so far I am happy.