From YouTube: Kubernetes SIG On-Prem Meeting 20170111
Description
Inaugural meeting of Kubernetes Special Interest Group On-Prem
Agenda/minutes:
https://docs.google.com/document/d/1AHF1a8ni7iMOpUgDMcPKrLQCML5EMZUAwP4rro3P6sk/edit#heading=h.nrh4k3ck5icu
B: OK, cool. So welcome, everyone, to the first ever SIG On-Prem meeting. I'm not sure we need much introduction to what this SIG is supposed to do, but maybe it's worth mentioning: we want to cover all things related to running Kubernetes on bare metal, on premise, essentially outside of the cloud providers.
B: They have had meetings on Wednesdays a little bit later, but it would be good if we shared the bi-weekly schedule, so the next meeting would be either next week or in three weeks, probably in three weeks. So, any comments on the time of the meeting, guys? What would you prefer, later or earlier? It sounds like earlier works for some, though for others this time means starting at night. OK, I think what we'll do is I'll try to send out this Doodle survey and we'll see what the response is.
E: Sure, can everybody hear me OK? Yes? Cool. So hey, folks! This is pretty exciting. It's a new SIG, as Zen was mentioning, and I'm pretty interested in the feedback from the community. I think we got a lot of positive reception from the initial message that went out on the mailing list, from a variety of companies.
E: Looking at a lot of the drivers for this new SIG, and people expressing their interest: so yeah, maybe we can just have an open discussion generally around what everyone would like to see in this SIG regarding on-premise efforts and deployments of Kubernetes, particularly behind the firewall, in private data centers, not on a public cloud provider. I think that's very broad.
E: We've had some exchanges in the Google group about what would be in scope and what would be out of scope, maybe ensuring that we don't have too much overlap with a variety of other SIGs that have narrow focus areas, with specific frameworks like OpenStack, or SIG Cluster Ops, and other SIGs. It would be worthwhile outlining a few things that we can really focus on and start to develop some work items out of, so that we can be productive with the time spent here.
E: I mean, I have a few ideas and opinions here, but maybe let's open it up to anyone who's on the call, anyone who's in this meeting, to share what you think would be the most crucial things for us to focus on in this SIG regarding Kubernetes and on-premise deployments.
E: So it looks like we have in- and out-of-scope bullets in the meeting notes here, yeah. If anyone has anything that you think might not be useful to cover, or would be out of scope... It looks like Zen is noting that out of scope would be talking about OpenStack deployments, which makes a lot of sense, since we already have an OpenStack SIG. Any other ideas from anyone as far as what could potentially be out of scope or in scope for the SIG?
F: Hey, this is Justin Garrison. Can you hear me? Yep? Yeah. I think one of the biggest pain points for on-prem stuff is just getting started and getting things up and running. It's hard because everyone's on-prem environment looks different, and so we shouldn't try to define, like, one way that can set everything up, because we're never going to get consensus on a base operating system and network layout and topology and everything. So I mean, we should have some best practices, and I...
F: ...think things like kubeadm help with that, where it kind of abstracts away some of the details: as long as you have Layer 2 connectivity you should be fine; the base OS doesn't matter as much. Some of that stuff, I think, is a better focus for us, and especially things that, like you were saying, don't overlap with other areas. If it's cluster lifecycle, make sure we go to that SIG for things like "how do I do upgrades."
E: Yeah, I mean, I think that's really well put, Justin; I agree with everything you just said. I think one of the initial things people will confront when they look at the SIG, if we care about the success of this thing and making sure that it actually lasts and is useful to a lot of people in the community,
E: is, you know, what all the focus areas are for, say, the Cluster Ops, Cluster Lifecycle, and OpenStack SIGs. Maybe it makes sense to confirm where the most overlap is with other SIGs, then meet with the leads of those other SIGs and get three or four points from each of them along the lines of:
E: "This is where you should be going for these specific things that are likely not to be discussed or worked on in SIG On-Prem," so that people know exactly where to go. Because I think one of the things that could happen out of this is just a lot of splintering and fragmentation, and the last thing we all want is confusion resulting from people not knowing what the SIGs are really for. So I'm happy to take the action on that.
E: Maybe, Zen, you and I can do this: we can connect with the leads of these other SIGs where we feel the most overlap is, collect three or four bullets on what the biggest focus areas are for those other SIGs, and then have that in our charter or mandate as a preface to say, you know, for people interested in these specific areas you should go to these other SIGs, but for SIG On-Prem we're going to be focusing on...
G: Hey, this is Spencer, can you guys hear me? Yep. OK, another thing I'd like to see a little bit around: I know we can't plan too much about network layout and things like that, but I'd like to see us develop some documents, just at a high level, on what makes sense for hardware and planning and that kind of thing, right? So what does a production-grade Kubernetes cluster look like? You know, how many API servers should you anticipate having, how many etcd nodes, and things like that, and providing...
E: Yeah, that makes a lot of sense. I think there's some overlap with maybe the broader question of how we define what "production" means and what a production architecture looks like; that's a cross-cutting thing across many different SIGs. But specific to bare metal and private cloud stuff, I think people do ask that question a lot.
E: I couldn't agree more, Spencer; that would be a really useful thing to have, particularly around what capacity management and node-count planning look like when you have a static environment that you can't auto-scale as dynamically as you can through an API on a public cloud on a very short-term basis. Thanks.
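To make the node-count planning question concrete, here is a minimal back-of-the-envelope sketch in Go, for illustration only (assumptions: kubelet's default cap of 110 pods per node, and an arbitrary two spare nodes of headroom, since a static fleet cannot auto-scale on short notice):

    package main

    import (
    	"fmt"
    	"math"
    )

    // workerNodes gives a rough worker count for a static on-prem
    // cluster: ceil(totalPods / podsPerNode) plus fixed spare capacity,
    // because bare metal cannot be grown on demand the way a cloud can.
    func workerNodes(totalPods, podsPerNode, spareNodes int) int {
    	needed := int(math.Ceil(float64(totalPods) / float64(podsPerNode)))
    	return needed + spareNodes
    }

    func main() {
    	// Example: 2,000 pods at the default 110-pod cap, plus two
    	// spare nodes of headroom, comes to 21 workers.
    	fmt.Println("workers needed:", workerNodes(2000, 110, 2))
    }

The real numbers depend heavily on pod size and hardware, which is exactly the kind of guidance a planning document could capture.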
F: I think, instead of defining some of that stuff ourselves, we should as much as possible link out to the documentation for those projects. If it's etcd, they have a bunch of documentation on "this is a highly available etcd cluster; these are the ways you can set it up," and we shouldn't be defining that ourselves, but say, like, hey...
F: ...this is a step you're going to need; go to this documentation. They have ways of running it in containers, or running it through the etcd operator, or, you know, on separate nodes. However it may be, I think we should link out to that as much as possible, instead of trying to define and curate that ourselves.
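In the spirit of linking out rather than redefining: once a highly available etcd cluster is stood up per etcd's own documentation, a minimal Go sketch like the one below can confirm that all members are visible. The endpoints are placeholders, TLS is omitted for brevity, and the import path assumes a current etcd client library:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
    	// Placeholder endpoints for a three-member cluster built per
    	// the etcd docs; real clusters should use TLS.
    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints: []string{
    			"http://10.0.0.1:2379",
    			"http://10.0.0.2:2379",
    			"http://10.0.0.3:2379",
    		},
    		DialTimeout: 5 * time.Second,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// A healthy HA cluster should report all three members here.
    	resp, err := cli.MemberList(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, m := range resp.Members {
    		fmt.Println(m.Name, m.PeerURLs)
    	}
    }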
C: I have a question about scope in general, I guess, which is: if this is on-prem, and we interchange on-prem and bare metal quite a bit, but we mean on-prem, are we also including things like, I don't know, VMware, and potentially other virtualization platforms like Xen and stuff, as on-premise but not quite bare metal? Yeah.
E: That discussion did come up. But that's a really good question; I think it came up during the initial exchanges, so I think it's a really important thing to clarify. This is Joseph. My view is it covers both virtualization and bare metal nodes: so, hypervisor agnostic, and then any physical machine that doesn't have a hypervisor and just has an OS sort of falls into the on-prem category. Lots of people just run, you know, Kubernetes on a bare metal node with an operating system.
B: From my point of view, we already have OpenStack excluded here; I think the main reason is that we have a SIG OpenStack. And for VMware, as I remember, there was also already... there was not too much feedback, but a SIG might, you know, be created at some stage. Another story is, I think just recently eBay released their Kubernetes story, and they're using OpenStack as the underlying hardware management platform, just because it's mature there, so they don't want to deal with that inside of Kubernetes. So, well...
B: I'm not sure if we should cover VMware and/or virtualization platforms in general. It'd be a good idea initially, because I guess lots of people have some feedback and experience already; at some stage we might just, you know, give way to some other SIG if it's created, but I'm fine to start with that, yeah.
G: I can see there being value in providing some getting-started guides around: OK, you want to run on-prem; if you want to do OpenStack, here's pointers to the OpenStack stuff; here's pointers to the VMware stuff, etc., etc. Yeah, I think talking about private cloud at a high level is fine; going in and supporting OpenStack as part of this SIG is probably not, right?
E: The one concern I have around the virtualization stuff is that it could splinter out. I'd hate to see sub-sections or sub-SIGs around virtualization providers, like we're starting to see with cloud providers. The broad trend over the last six to nine months is that now we have SIGs for specific infrastructure. I'm not sure if there's a SIG AWS, but you know, there's SIG OpenStack, there's a... so the major cloud providers have sort of dedicated SIGs.
E: If we start to get into the domain of "well, Kubernetes is also pretty unique on a per-virtualization-provider basis," there might be a SIG Xen, or SIG KVM, or, you know, SIG vSphere, SIG VMware or something, which I think is a bad idea. But then that also makes one think: if we exclude the virtualization users in SIG On-Prem, does that limit our sort of audience and participation base?
F: My opinion: I think this SIG can encompass the on-prem or whatever virtualization environment that isn't already covered by a SIG, because there is a SIG AWS and a SIG OpenStack. For, you know, "hey, we have some pointers for a VMware setup." Maybe we're not, like, fully running with it and defining everything, like startup scripts and everything like that.
F: So I mean, that's good, where we can have a more generic approach, and even things like Kubernetes the Hard Way, where it's like: here's the very basics of what you would need to get your cluster up. And as more and more things come along, if there's more focus around one area and someone wants to drive it, they can go off and do their own thing too, you know, purely focused on VMware or a different virtualizer, yeah.
B: I think we should be careful here, because it's already covered by, for example, kubeadm, right; this is the goal for kubeadm to support. So maybe we should concentrate more on some, you know, specific cases; but to be honest, I don't see any specific on-prem cases for upgrades. Maybe some other people see that, so I don't know. What are your thoughts, guys, about the upgrades? I mean, if we want to take care of upgrades, I think we have it more...
F: I agree, and the upgrade story is changing: if you're running self-hosted you do it one way; if you're running kubeadm, you do it a different way. Are you re-provisioning nodes for upgrades, or are you doing it in place? I think that's a Cluster Lifecycle thing, at least for now, because that story keeps changing release to release, and even 1.5 to 1.6 is already going to have, like, downtime migration upgrades for etcd3. So there's a lot of things there that I don't think we should focus on, also.
B: Cool. So there's another thing that... I don't know who added it, but I remember this from the initial conversations. This specific item is improving end-to-end tests for on-premise, but I think it should be a little more general, like, you know, closing the gaps that we have for on-premise installations. But maybe let's talk about the end-to-end tests; I think it's...
I: Yeah, this is Brenton from Red Hat, and we just, you know, hit problems. I don't know who added it to the initial announcement; it could have been someone else from Red Hat. It's just something that we care about. If there's anything we can do to make it easier for developers to run the end-to-end tests against bare metal, I think it would be good. I know...
E: Yeah, I think that's definitely going to fall into scope. Here we have to be super careful, because that's where the cross-cutting dynamics start to come in: there are dedicated SIGs for so many of the infrastructure primitives, like networking and compute and instrumentation and so on, and a lot of those complexities haven't really been surfaced in the context of bare metal.
E: So we could talk about some of the challenges, and develop efforts to make people more comfortable with supported best practices and patterns around those areas, but point back to the SIGs that exist for those specific things, broadly speaking, across all the infrastructure providers. I think that's going to be an interesting balancing act with this SIG; much of it is probably the same, though, as with the cloud-provider-specific SIGs like SIG AWS or SIG OpenStack.
B: Is anyone here also attending SIG Storage who could tell us what the current situation is? I wasn't there for some time. Are those guys concentrating more on storage in general, or is it more from the Kubernetes side? Because I still think there is a part of the storage story that we could take care of, like, you know, how to optimally deploy, say, your Ceph cluster or some other cluster. So, is anyone aware of what SIG Storage is doing in this area? I've...
E: I disagree with that a little bit. I mean, I think, with providing another, like, extension control plane on top of the base Layer 2 networking, Kubernetes requires tools to be configured and deployed specific to on-premise physical networks that are pretty orthogonal to some of the default, out-of-the-box services you get on a public cloud, like VPCs and dynamic IP allocations and so on.
E: But I think more of the gap is around the load balancing piece, whether you're talking about, like, connection-level Layer 4 or Layer 7 load balancing; some of those things can vary. I think it goes back to the SIG Storage question: where does the activity overlap with SIG Networking? I haven't really been too informed recently around what SIG Networking is doing, so maybe we could connect with them and figure out where things overlap as well.
F: I've thought about that a little bit, and I think that's not directly Kubernetes related. I do think there's some benefit in having controllers or something that exposes some of that and adds details, either labels about a node or, you know, some way to start and stop nodes through a controller. But I think a lot of it is fairly hardware dependent, and especially if we're taking on some other...
F: You know, running in VMware, it's a different story how you actually get, like, a console to something, or turn something on and off, because you're going through... I mean, granted, a lot of newer hardware has APIs to do remote turn-on and turn-off of subsystems, but that still requires a lot of initial setup that I think would just be too much detail and too much noise for what the SIG should focus on, sure.
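A minimal sketch of the controller idea raised above (surfacing hardware details as node labels), for illustration only and assuming a recent client-go; the node name, label key, and the detection logic itself are all placeholders:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Assumes this runs in-cluster (e.g. as a DaemonSet) with RBAC
    	// permission to patch Node objects.
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Placeholders: a real controller would read NODE_NAME from
    	// the environment and derive label values by probing the
    	// hardware (BMC, /proc/cpuinfo, and so on).
    	nodeName := "worker-01"
    	patch := []byte(`{"metadata":{"labels":{"example.com/has-gpu":"true"}}}`)

    	if _, err := client.CoreV1().Nodes().Patch(context.TODO(), nodeName,
    		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("labeled node", nodeName)
    }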
E: I'm a little indifferent about that. I think Dalton makes a really good point: that's certainly something that comes up, but it's maybe solution- or tool-specific, because there's a bunch of different systems that have their own opinions around how you create the spark of life on a machine and interface with, like, the lights-out management stuff. So I don't know. Well, what I...
F: I do think we can curate some of that information: if you're deploying your cluster on, you know, bare metal, and you're using, say, Tectonic, here's how you can interface with, you know, bootcfg, or their API, to actually, like, start the machine and provision it with out-of-band management, or, you know, some other means to get BIOS information and that kind of stuff. I think there is benefit in curating some of that, but not necessarily providing the details and how-tos ourselves.
J: That's why... it may actually be just specific to my particular interests, but in particular about Redfish: I'm kind of wondering if Kubernetes may someday actually speak to Redfish APIs directly. It's just something we've been kind of thinking about; I don't know if other people care. Just, you know, potentially to label nodes in particular ways.
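For background on that idea: Redfish is DMTF's standard REST API for out-of-band management, with a service root at /redfish/v1/. A hedged sketch in Go of listing the managed systems, which a node labeler could walk (the BMC address and credentials are placeholders, and a real deployment should verify TLS):

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"log"
    	"net/http"
    )

    func main() {
    	// Lab BMCs commonly ship with self-signed certs, hence the
    	// skip-verify transport; do not do this in production.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	req, err := http.NewRequest("GET", "https://bmc.example/redfish/v1/Systems", nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	req.SetBasicAuth("admin", "password") // placeholder credentials

    	resp, err := client.Do(req)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()

    	// The Systems collection holds links to each managed system;
    	// properties found there could be turned into node labels.
    	var systems struct {
    		Members []struct {
    			ID string `json:"@odata.id"`
    		} `json:"Members"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&systems); err != nil {
    		log.Fatal(err)
    	}
    	for _, m := range systems.Members {
    		fmt.Println("system:", m.ID)
    	}
    }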
B: There's already some work going on around CPU features and automatic labeling now, so I think there's a place for this story as well, and we are also interested in that. We had some stories when we were working on the podrace; we were thinking about node maintenance, this kind of stuff, and also...
E: So, maybe just to get through all the agenda items, we can close out on sort of in and out of scope. I think we have a fair bit here, and we can always revisit this a little bit in the next meeting as well. Then we have "what's the working cadence from various participants"; maybe that's a little bit redundant, as I think we've got a lot of interest in this thing.
E: The last one, I think we should just strike out, actually; I think we'll organically get a lot of support for various things that pop up, so we can just delete this one or just strike it out. There's a document, and I'm not sure if I created it or you did, Zen, but it lists a variety of efforts out there, open source projects, which I think is the thing we should focus on here for kind of simplifying the deployment and the management of running Kubernetes on bare metal.
E: I think it's just awesome to see that there are so many efforts; having a place for people to go and look at the various systems that are out there would be useful. So if everyone in the meeting here could just go to that doc and add projects that may be missing, or add some color to the descriptions, that would be great. I'm sure there are more than are listed there; I think there's only nine listed, and there's probably 20 plus, I would not be surprised.
B: In the same document there's a second tab with the current Kubernetes features that are in progress, or that are present, touching only on-premise stuff, so it's not very extensive. If you guys know about any PRs and feature proposals concentrated on on-premise, it would be nice to pull them in here. I think the next step would be to try to identify whether we can somehow commit to that as a SIG, and have some work that we track inside of the SIG, which would be really cool.
E: Yeah, so the next item here: begin the process of organizing best practices. I think best practices are going to be something that we spend a fair amount of time on in this SIG. Oh, I mean, it's great to see Justin from Disney on; he's an end user and pretty active in the community. Do you have any thoughts on this, Justin, from, like, a best-practices standpoint, on what you'd like to see? Or other users?
F: I don't know, I mean, yeah, use cases are always a little different on bare metal, so I am curious. I mean, just going through some of the projects, even the spreadsheet was interesting for me to see, because I didn't know some of them existed, and some of them I'm not sure are actually in scope for this. Like, I don't know if kops can do bare metal; I know it does Amazon, and they're working on Google, but I don't think it works on bare metal. That's...
J: ...how we handle that. I've had some concerns about it as well, because, I mean, technically bash scripting works; you know, it is a thing that runs on any host, right. And is that "bare metal support"? Does that have the things that we need? It might be good to define some, like, minimal expectations for what we want our tools to actually be able to do, I think.
F: The cool thing on GitHub: you know, I've seen dozens of GitHub repos of Ansible scripts, and things written in Chef and Salt and all these tools, that were like, "hey, I did this this one time; it supports this one version of this one OS with this one version of Kubernetes," and I don't think that's the way to provide these links. Even... I've been working with another friend that's been setting up a bare-metal cluster and following the Kubernetes documentation for one built on CentOS, and the CentOS one was really out of date.
F: It was only deploying Kubernetes 1.3, and it didn't provide a proxy or DNS server, and I was like, wow, this is really terrible documentation. So I submitted a PR to delete the page, because it's really out of date, and I don't think we should have people finding these pages that aren't supported or maintained and are just a one-off, like, "hey, I did it this one way, everyone look at it."
B: I don't know, it was there already. What about the reference architecture? I know it was mentioned in the mailing threads. Do you guys think it's even possible to define some reference architecture that we could suggest to users in documentation, or is the area too wide to even try it?
B: No, no, no, I was thinking more about the Kubernetes architecture in question, because I'm mainly running on bare metal, so I'm not really aware of how people deploy it on AWS or other providers. So is there anything specific about bare metal in terms of architecture, or is it just very common across the different providers?
F: The Cluster Ops group did a decent job making the reference architecture generic enough that it holds whether you're running on bare metal or VMs. It's just, you know: for a highly available cluster you want three masters behind a load balancer. What that load balancer is, we don't really care, but it should, you know, route 6443, or whatever port, to get to the masters, and likewise for your ingress controller or your service load balancer, whatever that may be. The reference architecture, I think, is generic enough.
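To illustrate the "three masters behind a load balancer" shape: the balancer only needs to forward TCP to the API servers, so anything from HAProxy to a hardware appliance works. A toy round-robin TCP forwarder in Go, for illustration (the master addresses are placeholders, and a real front end should health-check backends rather than rotate blindly):

    package main

    import (
    	"io"
    	"log"
    	"net"
    	"sync/atomic"
    )

    // Placeholder apiserver addresses; 6443 is the conventional secure port.
    var masters = []string{"10.0.0.1:6443", "10.0.0.2:6443", "10.0.0.3:6443"}
    var next uint64

    func main() {
    	ln, err := net.Listen("tcp", ":6443")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for {
    		client, err := ln.Accept()
    		if err != nil {
    			log.Print(err)
    			continue
    		}
    		// Pick the next master round-robin and splice bytes both ways.
    		backend := masters[atomic.AddUint64(&next, 1)%uint64(len(masters))]
    		go proxy(client, backend)
    	}
    }

    func proxy(client net.Conn, backend string) {
    	defer client.Close()
    	server, err := net.Dial("tcp", backend)
    	if err != nil {
    		log.Print(err)
    		return
    	}
    	defer server.Close()
    	go io.Copy(server, client) // client -> apiserver
    	io.Copy(client, server)    // apiserver -> client
    }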
F: And pointing at things that are focusing on those areas, where we can, you know, collect the information and kind of aggregate it. When someone comes and says "I want to run Kubernetes on bare metal," it's like: OK, well, look at this diagram first. We don't, you know, necessarily maintain that, but this SIG is doing all the work there. And then, when you want to come back and do an install, here's some tools that can do that install; when you want to do an upgrade, go over to the Lifecycle group and they'll help.
A: Regarding trying to straighten out a roadmap, right? Yeah, yes. So we expect the SIGs to have some draft roadmap defined for 2017, from every SIG, and we'd also like to see your goals for 2017, like which projects you expect to be implemented in Kubernetes. So, if you can, take a look at the different roadmaps from multiple SIGs; for example, SIG CLI presented their roadmap at the last Kubernetes community call, so you may prepare this as well.
A: We have discussed this with the leads of different cloud provider SIGs, like SIG OpenStack and the not-yet-established SIGs, and the main request, the main goal, for them was not preparing some code base but, for example, preparing some reference architecture. I suppose, likewise, from SIG On-Prem you could also provide your vision of a reference architecture: what an ideal Kubernetes cluster on bare metal should look like, something like that.
B: Yeah, so I think for SIG On-Prem it would come into, you know, other things in terms of code for Kubernetes, right. But yeah, we'll probably try to discuss async, sometime in the meeting minutes, what we could, you know, provide or improve as our contribution into 1.6. So what is the deadline for that?
A: The deadline for committing new features is January 24. You should also remember that this release is kind of a stabilization release, so, you know, not many crucial new features will be accepted here, but if you have some stuff that can be qualified as stabilization work, feel free to submit it.
F: One last thing that I was trying to add to the agenda, and I can't edit it still: I'm curious if we want to cover at all, or at least link out to, other projects that are based on Kubernetes and run on-prem, things like OpenStack or OpenShift; I know there's more and more that are becoming like that. I don't know if that's in scope or not, but I know there are people that may work their way to this SIG running those sorts of environments.
F: For me, I just want to say, like, at first I'm thinking no, because whatever you're running, you know, if it's OpenShift or whatever it is, you should have support from that different community. But I think we should acknowledge some of those projects that people are running on-prem that are Kubernetes based; you can use some of the Kubernetes tools, but it's probably a different area to get support.
B: I'm not entirely against it, to be honest. Well, this will be kind of an area where we'll have a clash of interests from different vendors, right? So, you know, we probably don't want to make the SIG the kind of place where vendors will be arguing about what's better. But, well, as you said, yeah, we definitely should acknowledge them, and also point at the components themselves, to show that there are interesting projects out there running on top of Kubernetes that make it even more interesting.
F: I mean, I do think it's a good place for someone who wants to, you know, follow up and see what's out there where, like, it ties in to Kubernetes. I don't want to do everything, you know; yes, it may be great to have some vendor-neutral place for that, but I don't want to again broaden the scope of the SIG too much, to where it just loses focus and kind of just collects links from everywhere.
E: We could just summarize some of the actions that have come out of this. So it sounds like, Zen, you and I can kind of connect with the other SIGs that are somewhat overlappy, and qualify some guardrails to direct people into the right areas of the community, as we start to define what our main in-scope areas are and crystallize that. That's the main one that I see here; I'm not seeing any other big action items off the top of my head right now.