From YouTube: Kubernetes SIG On-Prem Meeting 20170412
Description
Kubernetes Special Interest Group On-Prem Bi-Weekly meeting
Agenda/minutes:
https://docs.google.com/document/d/1AHF1a8ni7iMOpUgDMcPKrLQCML5EMZUAwP4rro3P6sk/edit#heading=h.nrh4k3ck5icu
Mailing list:
https://groups.google.com/forum/#!forum/kubernetes-sig-on-prem
Welcome to the SIG On-Prem meeting, 12th of April. We don't have many participants, which is probably something we need to work on, and we'll share some feedback after KubeCon — we had endless talks with different people there. So what we have on the agenda today is the Kargo demo by Matt Mosesohn; then Jorge Castro suggested starting with fixing the on-premise documentation — it actually doesn't look very good, so we would like to talk about that — and we also want to talk about SIG leadership.
For the demos, I had also planned a different one — most likely Digital Rebar, which is using Kargo internally — but people were asking about specific Kargo features, so I decided we can do the Kargo demo today and have the other demos in two weeks, probably. So — are you ready? Yeah? Okay, let's change the shared screen.
Of course, when you're deploying Kubernetes, you need to have some nodes that have different roles: you need masters, and then you have worker nodes. Some people call them minions; some folks just call it a node. You need to run a Docker runtime, and then you usually want some networking if you're using a multi-node deployment, and then certificates — certificates are not automatic.
You have to really set them up, unless you want to use the automated system where the kube-apiserver itself signs them, which I don't really recommend. I got this diagram from Wikimedia Commons; it's just a little layout of what components you need: etcd, the API server, controller manager, scheduler — that's the master node. You can learn more there, but you need that, plus — it's not shown here, but the actual network plugin also runs on the master, because it needs to be able to look up internal domains.
Deployment is incredibly divisive. I've seen a couple of other projects that were turned down for incubation or didn't get enough attention, because there are already several in the incubator, and some were — how do you say — grandfathered in that are not even active, but that's another story. There's only one official story for deployment, and that's kubeadm. Everyone else is sort of in this gray area — that's where Kargo fits, as well as kops and some others.
You can't run everything purely inside of Kubernetes — except with bootkube, which does require services outside of Kubernetes to begin with. So you do need systemd integration somehow to get everything going in most cases, and the way to do that is not agreed upon between different deployers. And lastly, it's still divisive because kubeadm is still not ready for production: it doesn't do upgrades or HA, but that's one of their main goals for 1.7.
They didn't want to do it before because they wanted to get more features in before they had to lock down the API and make it more restrictive — so they had good reason not to do it, but now that is coming to an end.
So Kargo is an Ansible playbook. It's not a script; it's not a playbook plus something else — it is an Ansible playbook. You run the ansible-playbook command, you point it to your inventory, you point it to the playbook called cluster.yml, and that's it. That's all Kargo is.
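To make that concrete, here is a minimal sketch of a Kargo run — the inventory host names, addresses, and group layout are illustrative examples, not from the meeting:

```shell
# Hypothetical minimal Kargo inventory; host names and IPs are made up.
mkdir -p inventory
cat > inventory/inventory.cfg <<'EOF'
master1 ansible_host=192.168.0.10
node1   ansible_host=192.168.0.11

[kube-master]
master1

[etcd]
master1

[kube-node]
node1
EOF

# The entire deployment is then one command (run against real hosts):
# ansible-playbook -i inventory/inventory.cfg cluster.yml -b
echo "inventory has $(grep -c ansible_host inventory/inventory.cfg) hosts"
```

The actual ansible-playbook invocation is left as a comment since it needs reachable hosts.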
You can set a bunch of options to configure your deployment. For example, if you need to bootstrap your hosts with Python, you have to specify what OS you're on, because the playbook will use that information explicitly if you don't have Python. You choose your network plugin; you choose what version of Kubernetes gets installed. These sorts of things are configurable, but it's still just a playbook. One big benefit — we've had a lot of feedback about this from the community — is that it's readable. So maybe it's not the fastest approach.
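A sketch of the kinds of variables just described — the names follow Kargo's group_vars conventions as I understand them, but treat them and their values as examples:

```shell
# Hedged example of a Kargo-style group_vars file; variable names/values
# are illustrative, check the repo's defaults before relying on them.
mkdir -p inventory/group_vars
cat > inventory/group_vars/all.yml <<'EOF'
# Bootstrap hosts that ship without Python (e.g. CoreOS)
bootstrap_os: coreos
# Which network plugin to deploy
kube_network_plugin: calico
# Which Kubernetes version to install
kube_version: v1.6.1
EOF
grep -v '^#' inventory/group_vars/all.yml
```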
It doesn't get everything done in just two or three minutes, but it's very readable. You can figure out what's going on, and figure out why it failed, very quickly, because Ansible is just commands wrapped in simple resource types with explicit task names. We do cover all the components necessary to install Kubernetes: that's etcd, Kubernetes itself, Docker, and optionally rkt as a container engine for the base services like kubelet and etcd.
We do the major OSes — that's CentOS, RHEL, Ubuntu, Debian and CoreOS — and Kargo works great for deploying on bare metal, cloud VMs, and so on. I guess not "and so on", because those are the three options. So, as I said, because it's Ansible it's cross-platform; you can even have heterogeneous nodes running different OSes. Finally, all the components are containerized. There's no single binary; it just runs systemd units, and everything runs through docker run commands.
Ansible does dependency management pretty well too, and each part of the deployment in Kargo is broken down into roles, so you can choose which roles you want to use for your hosts — it's pretty simple. I can show after the slides how the roles are broken down. This slide has a lot of text, but that's the entire installation. There's a preinstall stage where we kind of validate — okay, where is everything, up to the endpoints and so on: what IPs are required for the deployment, what are their hostnames.
We fill up /etc/hosts so all the hosts can resolve each other. Then we do some cloud preparation if necessary — there are some systems set up by Azure and on GCE that are kind of meant to protect you, but really they get in the way of Kubernetes, so we have to override them. And then: install Docker, sort out certificates, set up etcd, set up masters and the minions, and then add on some add-ons — kind of a gray area; by default all we deploy is DNS, and that's it, plus CNI.
The reason we do this network check is because, during early stages of deployment, there were certain things that kind of broke things — like bridge netfilter in layer 2 networks, for example — and these issues happen on different cloud providers. You have to make sure that encapsulated traffic can travel properly throughout your cloud, so that is just something to look out for, and it's good to have a generic tool that works on every provider. This could work on any Kubernetes cluster; it's not specific to Kargo.
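One common way encapsulated traffic breaks is MTU: the tunnel header shrinks the usable payload. This is a back-of-envelope sketch using the standard header sizes for IPIP and VXLAN — it is not Kargo's netchecker, just the arithmetic behind the problem:

```shell
# Inner MTU left over after encapsulation, given an underlay MTU.
# Header overheads are the standard sizes (IPIP ~20 bytes, VXLAN ~50 bytes).
inner_mtu() { echo $(( $1 - $2 )); }   # $1 = underlay MTU, $2 = encap overhead

echo "ipip  inner MTU: $(inner_mtu 1500 20)"
echo "vxlan inner MTU: $(inner_mtu 1500 50)"
```

If pods send 1500-byte frames over a 1500-byte underlay with VXLAN, the extra 50 bytes either fragment or get dropped.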
One other thing that sets Kargo apart from some of the other tools out there is its support for upgrades. We have supported upgrades since the Kargo 2.0.1 tag — that's since December. We officially certify Kargo upgrades from every release since then up to Kargo master. So if you deployed, you know, several months ago, you can just check out master and be reasonably confident it's going to work: it'll upgrade Kargo, Kubernetes, etcd.
It does them in batches, so you can cordon and then drain some nodes, run the upgrade on those hosts, and then move to the next group. Master nodes, as well, are done one at a time; that way you don't lose quorum in etcd and you don't have all your API servers go down, which would cause problems. But in a vacuum with no real workloads, you can use the regular playbook to upgrade — there's a chance your pods get rescheduled, but as long as kubelet doesn't go down for more than a couple of minutes, it's fine.
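The batched pattern described above can be sketched like this — this is not Kargo's actual playbook logic (Ansible does it with `serial`); the kubectl and ansible-playbook calls are shown as comments, with echoes standing in for them:

```shell
# Sketch of batched node upgrades: cordon+drain a batch, upgrade it,
# uncordon, then move to the next batch. Node names are examples.
upgrade_in_batches() {
  batch_size=$1; shift
  while [ $# -gt 0 ]; do
    batch=""; i=0
    for n in "$@"; do
      [ $i -ge "$batch_size" ] && break
      batch="$batch $n"; i=$((i+1))
    done
    shift "$i"
    for n in $batch; do
      echo "cordon+drain $n"   # kubectl cordon "$n" && kubectl drain "$n" --ignore-daemonsets
    done
    for n in $batch; do
      echo "upgrade $n"        # e.g. ansible-playbook upgrade-cluster.yml -b --limit "$n"
      echo "uncordon $n"       # kubectl uncordon "$n"
    done
  done
}

upgrade_in_batches 2 node1 node2 node3
```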
So theoretically it works, but really, in production you should use the safe upgrade-cluster playbook we have in Kargo. Kargo also enables high availability. There are really only two components that need to be accessible by clients. One is etcd — it's a cluster; we prefer you have three nodes, and it should be an odd number because of how quorum works. Each client connects to every etcd host through a comma-separated list, and we have TLS with separate certs for each host.
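The "odd number" advice comes straight from the quorum arithmetic: a cluster of n members needs a majority of ⌊n/2⌋+1, so going from three to four nodes raises the majority without raising fault tolerance. A quick sketch:

```shell
# etcd quorum math: majority needed vs. failures tolerated per cluster size.
majority() { echo $(( $1 / 2 + 1 )); }

for n in 1 2 3 4 5; do
  m=$(majority "$n")
  echo "nodes=$n majority=$m tolerates=$(( n - m )) failure(s)"
done
```

Note that nodes=3 and nodes=4 both tolerate exactly one failure, which is why even sizes buy nothing.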
It's a little bit more expensive than having one dedicated address for the API server, but it doesn't require having a dedicated virtual IP or managing one. So in production you probably want your own load balancer, but in a dev environment you don't have to do extra IP management, and this ends up working out pretty well for people.
Moving on: Kargo is tested at scale — we have deployed large environments. It does take some time to deploy, so it's not a lightning-fast tool, but it does work; it can be done in less than a day for sure. Since it's Ansible, all you have to do is update your inventory and you can add more nodes and redeploy, and you can also use the --limit option to specify which hosts you want to target. You don't have to update existing hosts, but you do need to have their facts cached.
These are items I've already talked about. Flexible DNS means we have the vanilla kube-dns that ships with Kubernetes, but we also have a setting for a more tuned dnsmasq that runs in front of kube-dns. This will filter out bogus domains that can't possibly exist. We do rely on search domains, and those force clients to try to look up certain domain combinations that just can't exist. So this really reduces a lot of the erroneous queries.
But we decided not to re-tune kube-dns itself; instead we set up a second layer. It doesn't really add overhead, and it kind of keeps the complexity down. So — I've kind of raced through this, but the next item is what's on our roadmap. We do want to integrate with kubeadm; kubeadm now has support for phases. Phases means you can do just one piece: just set up the kubeconfig, just configure kubelet, just generate certificates, or fetch a certificate for a given node.
So these items are nice, and they allow you to adopt different parts of kubeadm piecemeal, as you want, rather than all or nothing. Other items that are really desirable from our user base are to improve AWS support — doing shared storage on AWS, using their load balancer and so on, rather than having to spin up your own for API server connectivity.
We have a contributed playbook for GlusterFS, and the reports are that it just works: if you have an existing GlusterFS cluster, you can just drop in the storage config and it works, but we don't have anything really documented inside the repo. A lot of people who use these deployers really want to run their own Kubernetes builds, because most basic deployment tools may not be convenient for their patches and their testing. For multi-node, Kargo is one of the best options out there, just because it works everywhere. So, bringing up the last point:
we're definitely looking for more contributors to Kargo. It's in incubation and has sponsorship from Google for running CI — really expensive CI. I didn't mention that we do test other OS and deployment combinations there. The project page is in kubernetes-incubator/kargo — check it out.
Okay, so this is a copy of what we do for CI. This is not bare metal — so it's quite inappropriate for SIG On-Prem — but it is a recording of what we do. We use Ansible to create instances; you see these are GCE, because Google gives us credits to test there. So we set up instances of the standard size with two vCPUs and four gigs, I believe, but you can get by on as little as 1.8 to get started.
Oh yeah, we use python-apt to configure the Docker repo selection. So for Docker we default to 1.13, even though the latest supported one is 1.12; you can configure and specify that. How much did you ask — do we blend in system packages? We do install Docker; it's managed by Kargo except in the CoreOS scenario, because CoreOS should have it pre-installed.
We set up kubelet first, and kube-proxy; they will crash-restart until the API server is up, which is okay — it just creates some static in the logs. We generate certificates: one API server certificate, one admin certificate that gets used for the kubeconfig on the masters, and then one per host for kubelet itself.
Moving down: Calico runs as a systemd daemon — that only applies to Calico. We run Weave, flannel and Canal all inside of Kubernetes as DaemonSets, and they're all upgradable either way. Calico, though, as a DaemonSet can't survive upgrades yet, so we're still running it as a systemd daemon right now. Moving down a little bit: CNI is set up after we've turned on Calico.
After setting up each of the static manifests for the master — the kube-apiserver, scheduler, controller manager — we wait for them to be up, just in case something that follows wants to access them right away. It slows down the plan a little bit, but we avoid race conditions that way. The same goes for the Calico policy controller, if you want to use that, as well as the Calico route reflector, which is useful at scale.
If a node goes down, it changes the state. Here's DNS as a function: we have two modes. One route is where we configure the hosts to point to the DNS and set some dhclient preferences; the other route is where we directly inject DNS preferences into the Docker daemon, and the containers get their config that way. Both options work, but the first option is vulnerable to other services touching /etc/resolv.conf.
So we get dnsmasq and kube-dns, with the dnsmasq proxy running on every node. Our default services are dnsmasq, kube-dns, the default community GUI, and the netchecker. The netchecker is just for showing connectivity: it reports whether all the hosts checked in and whether there are any issues reaching each other.
Contributors are mostly Mirantis. The project was started by a pair of French gentlemen who have since moved to other companies — one of them is now at Dailymotion — but that's not related to their contribution to Kargo. Some people from Dell and Intel have also been attracted in.
There's the repository, and there are the sources for all of the containers, which are mostly hosted on Quay.io. But you can just have some host docker pull them and then push them to a local registry, then update your config to point at your registry, and it'll deploy from there — and that's actually strongly recommended at scale.
I'll admit there is a time breakdown, and part of it is the number of tasks there are: just running Ansible and evaluating things is about a third of the time, another third is pulling Docker containers, and the rest is the actual work — changing files and config, starting things, waiting for things to be ready.
Okay — thanks a lot, folks. Thank you, Matt. So, Jorge suggested to eventually start fixing the documentation, and this is how it looks now — the bare metal choices. So definitely there are a lot of choices, I would say, and we would like to take it and make some more structure here, etc. Is anyone actually interested in doing this work? I think I can start with some basic structure for the next meeting, and we will probably be looking for volunteers to do this work. So — anyone interested in this?
So yeah, I have a pull request to remove it, but there are like four steps they want you to take for anything that you want to remove completely — it has to go through like a couple of months of deprecation — and I haven't put in that effort, because I just haven't had the time anymore. But yeah, there's definitely some cleanup that needs to be done. I mean, there are three Fedora install guides, and it's like, well, maybe those can be consolidated. And, you know, I don't even see kubeadm on this list.
It depends on whether you're kicking the tires or you're actually having to deploy something for dev, test or prod. I think that if you're just kicking the tires, you're going to find whatever four or five machines you've got and start playing with things and seeing if you can get it up. And I think that if you have to actually deploy something, you start with some kind of system for making sure that all your machines are relatively robust and consistent before you start going at it.
A question about distros: for Kargo, for example, there are distro-specific variables you have to set. I'm not asking about a specific distro itself, but about the concept — what is the bullet point on the first slide saying about distro support?
All right, I think it is a good start. I will talk to the docs team — maybe they have ideas — and I will also post an email to the mailing list, and we can start there. Yeah, definitely this is something we have to fix here, because it makes the impression that on-premise Kubernetes is really hard — it looks like the early days. Okay.
We're a little bit running out of time, so the last thing on the agenda is actually SIG leadership. I'm not sure we will be able to change this here — probably not, because we will need some more transparency and we need to go through the mailing list. But the person who initially started the SIG has in fact moved on; I spoke to him at KubeCon. He's still working in the company, but he just thinks he cannot devote too much time to a SIG like this, and it needs someone running it.
So I also need to talk to the community team, because it looks like there's no procedure for doing this kind of stuff — like, you know, becoming the SIG leads, etc. I think we'll just rely on some agreement on the mailing list. So yeah, please think about it if you're interested in running the SIG, and bring your ideas. And again, it would be great to have someone who is really working with this day to day, knows the community, knows the solutions, etc.
So that means on your Linux machine you have a QEMU/KVM install, and there's a virtual bridge where rkt or Docker runs network services like your DHCP server, your DNS server, a matchbox server. It creates an isolated little environment where you can test out different components.
Maybe you want to run a squid proxy in there, or some other component — set up Vault, or anything else you would normally set up on premise — and then you just power on the VMs that are attached to that network and see what happens. So in our case, we try to bring them up into a whole cluster.
No, I don't think there's such a thing for the servers. I think there are more problems with that approach — like, you know, we don't even have a standard way for many of the things that you get in the clouds just out of the box: DNS stuff, load balancers, global load balancers, this kind of stuff.
The other issue I found when I was running an Intel NUC cluster to start developing on matchbox was that it wasn't similar enough to production: it lacks BMCs, and that was part of the way we wanted to manage those machines in an automated way. So even Intel NUCs sort of fell short on the way to the real hardware — like a real on-premise set of machines that we have at the office.