From YouTube: OKD Working Group 2019 10 29 Full Meeting Recording
Description
OKD Working Group 2019 10 29
Full Meeting Recording
co-chairs: Diane Mueller, Daniel Comea, and Christian Glombeck
A: All right, everybody, welcome again to another OKD Working Group meeting. I'm going to drive this via the community page, so if you've got other additions for this agenda, please let me know. In the chat you'll see a link to this document, which you are all filling out — if you can, please add yourself to the attendee list.
A: Also let me know if you're coming to KubeCon — if you could note that in this section here on the meetings page. If you're going to be there, let me know; I'll probably set up a poll, because we have a room set aside for us at KubeCon. It's at the Gaslamp Marriott across the street from the Convention Center. It only holds 15 people, but I thought we could at least do a meet-and-greet there, or maybe hold office hours at KubeCon.
A: So I'd like to find out who from the working group is going to be there, so maybe we can coordinate. As well, while everybody's filling in their agenda items: we have a speaking slot set aside for the London OpenShift Commons Gathering on January 29th, and I think, Christian, you've been tapped to do part of that by Tanya, who's the organizer this time.
A: Right, so I'm going to try to add these things in at each meeting, and if people have other events, add them too. I'm really trying to figure out ways to get the messaging out about OKD and to build up this working group with some more external voices.
A: So if you can get yourself to KubeCon, that would be great for those meet-and-greets — Clayton, I know you said you were going to be there. And yes, definitely DevConf.CZ; we should have a presence there, and I'm planning on attending because it's one of my favorite things to do. If there are other events, please just add them in, and I think I'm going to use the projects page here to list out all the upcoming events and see who is available to speak to these topics.
A: So let us know if you're available and have permission and funding to travel — that would be great. And Danny, I know you're busy because you've just joined a new company and it's a little crazy right now for you, but that's what I'm looking for: to get a little OKD content, even if it's just a five-minute lightning talk, at each of these events. At DevConf we can probably get a whole speaking slot; we need to figure out when the CFP is.
A: Yeah. And the other thing is, if there are meetups in your area or something like that — I know I'm on the hook for a one-page slide update on the OKD roadmap, at least internally to Red Hat, but it should be usable externally as well. Once I get that done, I'll circulate it to this group too, so you can give me feedback on it and make sure I got it correct
A: ...and didn't over-promise anything. What I'd like to do is at least have a standing deck we can use — with the roadmap and how to get involved — so that anyone doing a meetup can just pick up that deck. We'll get it into the GitHub repo so it's a standing resource, and really try to get the word out a little bit more
A: ...about what we're doing here and the timeline in the roadmap, so we get some good external feedback on it as well. That's what I had in terms of upcoming events, so in the interest of time I'm going to let Christian give us the OKD update now. We have a little bit of a start on it in here, and I know we have Steve Hardy on deck to do a presentation.
A: Fine — we just need to have a connection with that group and make sure we're in sync and know what you guys are doing, so that'll be perfect. So, Christian, why don't you take it away and give us a bit of an update on what OKD is doing. If anyone else has other agenda items, please just add them in and we'll rock and roll here.
B: All right, so I just pasted a link to our project Kanban boards for OKD in the OpenShift organization. Essentially, we have lots of wheels turning at the moment and we're really making progress. Clayton, Vadim and I — certainly Clayton and Vadim — have done lots of work getting the OKD CI set up on Prow. We're working on that right now, and very soon I think we will have a job that continuously builds the machine-OS content and things derived from that.
B: Yeah — and that picture is great, thanks again for that; it's in the chat if you look. Really, we're getting closer to actually getting an alpha release out. Right now we're building all the RPMs. We have FCOS branches in the MCO — the machine-config operator — and in the installer repos, from which we're going to be building our components with Ignition spec 3 support.
A: Is that something people can play with in, like, two weeks' time? Is that what you're thinking — by the next meeting?
B: By the next meeting, I hope. We can get most of the CI stuff set up this week — I'm not sure, because there's lots of work going on with the 4.3 release as well. But yes, it's coming very soon; I can't really say what day, but I hope by next week we'll have something. By the next meeting I'm pretty sure we'll really have something for everybody to test.
C: It would be all the master branches, and then there'd be the two or three repos that have FCOS branches, and those FCOS branches would just stitch in. Probably the next question after that is how quickly, because the Ignition v3 stuff getting in has to be done across multiple repos. So, as Christian just pasted: it's basically master plus spec 3, but it'll trail a little bit.
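For context on what the spec 3 work in those branches implies, here is a minimal sketch of an Ignition spec 3 style config, expressed as a Python dict and dumped to JSON. The concrete field values (hostname, SSH key) are hypothetical illustrations, not anything taken from the meeting; the real configs are rendered by the installer and the machine-config operator.

```python
import json

# Minimal Ignition spec 3 style config (hypothetical example, not an OKD artifact).
# Spec 3 moves the schema to the 3.x series and reworks how sections are merged
# compared with the spec 2 configs RHCOS-based clusters were using at the time.
ignition_config = {
    "ignition": {"version": "3.0.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/etc/hostname",
                "mode": 420,  # octal 0644
                "contents": {"source": "data:,okd-worker-0"},
            }
        ]
    },
}

print(json.dumps(ignition_config, indent=2))
```

The point of the FCOS branches is that the MCO and installer have to render this newer schema rather than the spec 2 configs used so far.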
F: Hey Christian, something I've been curious about, watching the journey from the 3.x family to the 4.x family: within the origin repo that's in GitHub right now, there were pretty clear instructions in the HACKING doc and the CONTRIBUTING doc — it wasn't Markdown, it was AsciiDoc — about building each of the components from which you would then assemble an OpenShift cluster. That's obviously changed with the 4.x family, because those build instructions no longer work.
F: One of the things I'd be really curious about, which might actually enable some of us outside of the direct Red Hat community to assist in minor ways, would be a refresh of those build instructions: how the new process works to build the new containers, the new RPMs, the components that then go to make up an OpenShift cluster — which the installer itself is now pulling, rather than the old style of the Ansible playbooks.
C: I think that's an area where it can help to have other folks give feedback as well. Right now, most of the image builds are done by a component within the CI infrastructure that layers and builds the images. It's also possible to build those using straight-up Docker builds, but it's a lot of wiring stuff together, so it could be that we still have a gap on the tooling side to make it easy for someone to rebuild all of them themselves.
C: It's a little bit like if you wanted to build CI or tooling that rebuilds everything that's in Fedora without using the Fedora tools — you'd be running a bunch of scripting across a lot of sources. I think we have that gap right now: we don't quite have a tool that says "rebuild all of the things from master and stitch them together," because that work is spread across a lot of the teams and folks who are working on it.
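To make the "lot of wiring stuff together" concrete, a do-it-yourself rebuild would look roughly like the sketch below: clone each component repo and run a container build against its Dockerfile. This is only an illustration under assumptions — the repo list, the Dockerfile locations, and the `quay.io/example` registry are hypothetical — not the CI component described above, and stitching the results into a release payload is exactly the missing piece.

```python
import subprocess

# Hypothetical subset of component repos; the real payload contains dozens of
# images, which is why "rebuild everything yourself" currently needs glue scripting.
COMPONENT_REPOS = [
    "https://github.com/openshift/cluster-version-operator",
    "https://github.com/openshift/machine-config-operator",
]
REGISTRY = "quay.io/example/okd"  # assumption: your own scratch registry


def build_component(repo_url: str) -> str:
    """Clone one component repo and build its image from the Dockerfile at its root."""
    name = repo_url.rstrip("/").rsplit("/", 1)[-1]
    subprocess.run(["git", "clone", "--depth", "1", repo_url, name], check=True)
    tag = f"{REGISTRY}/{name}:master"
    subprocess.run(["podman", "build", "-t", tag, name], check=True)
    return tag


if __name__ == "__main__":
    built = [build_component(url) for url in COMPONENT_REPOS]
    print("built:", *built, sep="\n  ")
    # Assembling these into a release image is the step that still lacks a
    # simple, single tool, per the discussion above.
```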
A: All right, so I'm looking at the chart here — support for the Ignition spec 3 and FCOS. When is Ignition spec 3 coming for OpenShift itself?
C: I don't know why we wouldn't target it for 4.4. I think what would probably happen is, when 4.4 reopens — or when we branch for 4.3 — we would update OKD to 4.4 anyway, because that's where those changes would go. And I think we do need to get to the point where we have a commitment to a concrete deadline to merge those; Christian and Vadim, that's something we should probably sync on.
C: The pivot during bootkube — which would allow us to get the latest, or the correct, version of the MCD on the bootstrap machine — is probably one of the most impactful changes and needs to be orchestrated carefully, but once that's there, the other things become easier.
C: I think it's just the impact — does it confuse the installer, does it complicate the failure modes of the installer? If we reboot and it doesn't come back, does the bootstrap gather still work correctly? And it adds some time to bootstrap, so that could also be potentially problematic.
H: Let's say, working on the assumption that 4.4 is on the table — is the Ignition provider for Terraform also on the hook to be maintained by someone, or is it going to be replaced by the shim the OpenStack team is working on?
A: Right, then — we've covered the OKD-on-FCOS item, and I know we've got it separated out as separate tasks here on the agenda; I think you've covered those off. Reviewing the engineering tasks, Christian: do we need to add an engineering task for tracking Ignition spec 3 in OpenShift itself, or are we good just the way it is?
A: Mm-hmm. And we've already had the Fedora-versus-CentOS conversation. The next thing I have on the agenda was about resourcing OKD, and I know you mentioned getting awareness on the engineering team for the Ignition 3 stuff. Are there other liaisons we need to set up, Christian?
A: So my only concern — and I raise this just from historical experience — is that when we release something everybody can test and we start getting a lot of feedback, hopefully from the community, how are we going to handle that input? Should we just have people posting to the OKD working group Google group, or do we want to open up an issues repo on GitHub for people to use, so that we can do proper tracking?
A: What I'm hoping is that we're going to have a deluge of people testing this — and you know me, I have happy ears: everybody tells me they're going to test it, and maybe they don't — but we've got to get the word out. The other thing I was thinking was, if we do have this, we can hold office hours at KubeCon, and perhaps part of those office hours is helping people get the download up and running, or just demoing it and creating a video or something, and then having people around to collect that feedback.
E: It may also make sense, when we have an alpha ready, to see if someone could work with the Fedora Magazine people to get a post up there, so that the Fedora community is aware that we now have a direct consumer of Fedora CoreOS — because that's a whole new thing we don't have right now. And it would probably lend a degree of stability to Fedora CoreOS, in my view, because something is already consuming and using it.
A: This is coming out around KubeCon North America, so we can — and I have it at the Gathering too — I really want to do a lightning talk on it at the very least: five minutes, if Christian or someone is there, and give you the podium to tell people. Use the one slide that I'm supposed to make, show people that the alpha release is there, and tell them what the roadmap is in five minutes or less — just to keep getting the word out, because I keep getting asked the question.
A: So it sounds like we're almost at the point — unless someone has snuck another topic in — where we should ask Steve Hardy to maybe take over the screen for a MetalKube update.
I: Yeah, there we go. So I saw the agenda invite for the previous meeting — sorry I didn't actually make it to that call — but Russell, Brian and I have been working with a few other folks from Red Hat on building out the bare-metal provisioning story for OpenShift, and there are kind of two sides to this. One was to build out an upstream community, which originally was called MetalKube.
I: There was a conflict with another community project, so it ended up being renamed Metal³ — "metal kube" with a three, as on the slide. That's been going pretty well; we started this early this year and we've got things basically functional and working end to end, and within the upstream community there has been interest from several other companies, in particular within the Kubernetes community.
I: More broadly, some folks from the Airship community are, I think, getting involved and are interested in using it for their bare-metal provisioning. So I thought this is a good opportunity to give everyone a bit of a heads-up.
I: I know there are a few people on the call who are already aware of what we're up to, but there's been quite a lot of progress over the last few months and things are moving quickly.
I: So hopefully this is just a chance to give everyone a quick update and answer any questions you may have. The upstream pieces are really based around the Cluster API / Machine API integration, and this is obviously following the pattern of the OpenShift 4 install story, which is much more based around using Kubernetes-native interfaces and resources to manage the cluster — both the initial deployment and day-two operations.
I: So we looked at ways to align with that as closely as possible, whilst also enabling automated management of bare metal. Obviously you still have the option of user-provided infrastructure, but this was much more about providing a smooth on-ramp for people who just want to lay down an OpenShift environment on, say, a rack of nodes and have it work out of the box with a minimum of input from them. So we basically created a new controller, which is called the bare metal operator.
I: Maybe we should have called it the bare metal controller — there's been some discussion about renaming it at some point. But basically this is an operator that knows how to walk a bare-metal node through the steps of deploying it, and it's got pluggable backends. At the moment we're just targeting a single backend, which uses Ironic — that's a bare metal management project from OpenStack.
I: We decided to reuse it primarily because it's got very good vendor support for the different hardware protocols — things like virtual media support, and both Redfish and IPMI — and we didn't want to have to reinvent all of those on day one. It's also got a very diverse community behind it.
I: So we decided to use it for at least this first iteration, and it's been working out pretty well, but we made a conscious decision at the outset to make it an internal implementation detail.
I: So none of the Ironic API details are exposed to Kubernetes users; it's really just an implementation choice we made to bootstrap this effort. Talking of the Kubernetes APIs involved: associated with the bare metal controller is a new custom resource definition, the BareMetalHost, and this is basically just a representation of the physical nodes. It's independent of the Machines and the Nodes within the cluster, from the Cluster API view of the deployment.
I: This is just the bare metal inventory, and it's where you specify things like your baseboard management controller credentials and MAC addresses — the kinds of things you need in order to do automated deployment. At the moment we're using PXE boot, but there's also work going on to allow virtual media support as well, which removes some of the PXE-related requirements. The other piece of this was the cluster API provider.
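As a rough illustration of that inventory, a BareMetalHost and the BMC credentials Secret it references look something like the sketch below, rendered from Python dicts. The names, addresses and namespace are hypothetical, and the API group/version is an assumption — the CRD was moving from the MetalKube naming to `metal3.io` around this time, so check the project docs for the current schema.

```python
import json

# Hypothetical BMC credentials Secret referenced by the host below.
bmc_secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "worker-0-bmc-secret", "namespace": "openshift-machine-api"},
    "stringData": {"username": "admin", "password": "changeme"},
}

# Hypothetical BareMetalHost: the inventory record described above, carrying the
# BMC endpoint, a reference to the credentials Secret, and the boot MAC address.
bare_metal_host = {
    "apiVersion": "metal3.io/v1alpha1",
    "kind": "BareMetalHost",
    "metadata": {"name": "worker-0", "namespace": "openshift-machine-api"},
    "spec": {
        "online": True,
        "bootMACAddress": "52:54:00:aa:bb:cc",
        "bmc": {
            "address": "ipmi://192.168.111.10",
            "credentialsName": "worker-0-bmc-secret",
        },
    },
}

print(json.dumps(bmc_secret, indent=2))
print(json.dumps(bare_metal_host, indent=2))
```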
I: This is basically the Machine API actuator, which knows how to match up the Machines — as specified in the MachineSet and created at install time — with the BareMetalHost objects, and that's the code we're using to do that. So those are the two big pieces.
I: There are also some other pieces, namely the container images to make all this possible — we've got container images for all of the Ironic services — and there are also the development scripts, the metal3 dev-env.
I: So we've got the bare metal operator, and within that same pod there are actually a number of containers to enable the provisioning service, which is Ironic — for simplicity there's just one box on the slide. Then you have one or more BareMetalHost objects, and associated with those we create some data for the config drive — which is basically the Ignition data used for that host — plus the credentials needed to control the baseboard management controller, so that the Ironic layer knows how to do all of the provisioning.
I
And
so
we
can
kind
of
use
that
as
our
black
box
that
bit
dries
the
the
provisioning
of
the
nose.
They
also
have
some
other
nice
features
that
can
do
introspection
of
the
nose,
so
we
can
do
better
validation
before
actually
deploying
to
the
nose
make
sure
you
know
the
expensive
number
of
disks
are
there
and
that
kind
of
thing
it
can
also
do
automated
cleaning
of
all
the
disks
on
decommissioning
of
a
load.
I: So if you decide to recycle a node, it lets you do that more safely — primarily that's of interest in a multi-tenant environment — but there are a few nice features we get for free by virtue of plugging that in. This is a visualization of the Cluster API integration.
I: This will be familiar to anyone who's worked on the other platforms: we've got the bare-metal-specific controller and actuator, and everything else is basically driven via the normal Machine API primitives.
I: So if you want to add a new worker node, you just create a new BareMetalHost object — so that there's a spare host that can be matched — and then scale up your MachineSet. That's basically the only difference from the cloud providers: you need to maintain your BareMetalHost objects yourself, because there's no API to generate those the way there is with AWS or one of the other cloud providers. So that's kind of the upstream Metal³ side.
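A minimal sketch of that second step — scaling the MachineSet once a spare BareMetalHost exists — using the Python Kubernetes client. The group/version and the MachineSet name here are assumptions for illustration; on OpenShift 4, MachineSets live under `machine.openshift.io/v1beta1` in the `openshift-machine-api` namespace.

```python
from kubernetes import client, config


def scale_machineset(name: str, replicas: int,
                     namespace: str = "openshift-machine-api") -> None:
    """Patch spec.replicas on a MachineSet; the bare-metal actuator then
    claims a free BareMetalHost for each new Machine."""
    config.load_kube_config()  # or load_incluster_config() when run in-cluster
    api = client.CustomObjectsApi()
    api.patch_namespaced_custom_object(
        group="machine.openshift.io",
        version="v1beta1",
        namespace=namespace,
        plural="machinesets",
        name=name,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    # Hypothetical MachineSet name; the equivalent CLI would be
    # `oc scale machineset <name> -n openshift-machine-api --replicas=3`.
    scale_machineset("example-cluster-worker-0", replicas=3)
```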
I: It's all based around the Cluster API side of things, and there are a few links here — I've already mentioned the main link where you can see the docs — and all the various repos are under the metal3-io org on GitHub.
I: We also have a good amount of discussion on the upstream Slack channel, #cluster-api-baremetal, and there's an upstream Google group as well, metal3-dev, so feel free to come and reach out to us.
I: Do reach out if there are questions. Then the other side of this, which I was just going to quickly go through, is the OpenShift integration of this work. Alongside the upstream work to enable bare-metal provisioning, there's also been an effort to make use of that work within the OpenShift builds, and those of you working on the repos we've touched may well have noticed this.
I: Over the last few months we've landed a new bare metal platform, currently experimental, in the OpenShift installer, and that's basically functional.
I: We've still got a few to-do items that we're iterating through, but the aim is to get to the point where it's fully supported in the not-too-distant future. The other area where we made some changes for this platform is that the machine-config operator now knows how to deploy a hosted load-balancing solution.
I: Obviously, in the bare metal case you haven't got a load-balancer-as-a-service or anything like that, so this basically allows us to deploy HAProxy and Keepalived in a configuration that lets us load-balance and fail over between the three master nodes for high availability. The other thing we had to address was the lack of DNS-as-a-service, so for internal DNS resolution we basically use mDNS and CoreDNS, which means we only have a fairly small number of upstream DNS requirements, effectively.
I: Yeah, so I remember we did have a look at that and it didn't seem like something we could just drop in. Effectively, a number of the folks working on this had previously worked on production OpenStack environments, and in that case we used HAProxy and Keepalived running in containers, so we pretty much just replicated the same setup in this environment.
I: Yeah, absolutely — wherever possible it would be nice to reuse the implementation across multiple platforms. This actually is already being reused: I think the DNS and the load-balancing pieces are being reused already for OpenStack as well as IPI bare metal. As I said, if there's something we could use that would be more applicable to other platforms, we can certainly have that discussion.
I: We've tried to make it all fairly pluggable. At the moment it's just that the HAProxy, Keepalived and DNS services are configured by the machine-config operator, but certainly we could drop in something else if there's a good alternative.
H: The first question: for the credentials to access the bare metal — you know, IPMI or whatever — does it follow the same pattern as in the VMware provider, where you keep all the secrets in a ConfigMap?
I: Ah — we create a Secret and reference that from the BareMetalHost objects. I haven't actually looked at the VMware one, but that's probably something I can go away and take a look at. I'm pretty sure the folks who did this looked at the other platforms, and we tried to align where possible.
H: And one more thing: in the past I spoke with Russell and Doug, and I'm just trying to see whether the thinking has changed recently on the Ironic bit. Obviously Ironic does a lot — PXE and suchlike — which is nice in one way, because it controls the whole bare metal box, but on the other side it's not okay in some cases, because maybe you're not getting the keys to the whole chassis — for example, if you have, I don't know, Cisco UCS or similar big form factors.
I: Yeah, that's certainly a possibility. The reason we didn't do that at the outset was that we were trying to get things going very quickly, so we just wanted to drop in a provisioning component — several of us had already worked with Ironic quite extensively, so we were comfortable it was production-ready. That said, we were also very mindful that this kind of discussion was likely in the future, and so within the bare metal operator...
I: ...we've been quite careful not to expose any Ironic-specific details, so it should be possible to drop in an alternative implementation — which, as you say, could be a lightweight alternative just for dev testing or something; there have been discussions upstream about various other implementations. It really comes down to the development effort required and the overhead of maintaining multiple backends in the long term, but yes, the idea is to make that possible in the future.
I: None of the state is stored in Ironic; it's all stored in the Kubernetes resources, and the idea is that Ironic is supposed to be ephemeral. So it's different from, say, the OpenStack provider, where you're relying on state in the OpenStack APIs. In this case we use the OpenStack APIs just as a temporary step to do the provisioning, and ultimately the state is reflected in the Kubernetes resource — the BareMetalHost object.
I: And we're not using RabbitMQ — we're using JSON-RPC, with no message broker. We do still have a requirement on the database; it would be nice to get rid of that, but again it's ephemeral — we just spin up MariaDB in the pod, and you can kill the pod underneath it and we just recreate it, and that's okay.
I: The other thing I didn't mention about the installer is that it also uses Ironic to provision the three masters. We have a special Terraform provider, and that's integrated inside the installer such that on the bootstrap VM we start Ironic using the same containers that get started on the cluster. Eventually it would be nice if we used the Machine API operator in both places, but with the current implementation that's not possible, so for now it's just a systemd service.
I: Hopefully we'll switch that to using the machine-config operator in the near future — there's actually an open issue for that. The only other major gap we're iterating on at the moment is some changes to the platform status API: we need to wire in a few values from the installer to a ConfigMap that the Ironic containers need. Currently that's still created manually, and some folks on the team are working on getting changes into the machine API operator and also the openshift/api schema to support that. So that's kind of it.
I: The current status is functional. We've been evaluating this on bare metal, and we're working on additional production requirements such as how to enable single-stack IPv6 and larger-scale deployments, that kind of thing. It's certainly at the point where, if you're interested, evaluation is possible. With that in mind, there's also another set of scripts, similar to the ones for the upstream repos.
I: Basically, we've got some scripts that will set up a VM-based test environment on a bare metal box with a reasonable amount of RAM. This will set up a number of VMs for you, which are basically dummy bare metal, and then we have some services running in containers that can act as a dummy BMC — for IPMI, and there's dummy Redfish support there as well — so this is a quick way to get a test environment running end to end.
I: You also have the option of deploying on bare metal using openshift-install directly, which is the ultimate goal here: to allow people to just run openshift-install exactly as you would for any other platform, input your host details, and then it deploys. We're pretty close to that being possible.
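For a sense of what "input your host details" means in practice, the bare-metal platform section of an install-config looks roughly like this, sketched as a heavily abridged Python dict and printed as YAML. The field names were still evolving while the platform was experimental, and all the values here are hypothetical, so treat this as an assumption rather than the final schema.

```python
import yaml  # PyYAML

# Hypothetical, abridged install-config for the experimental baremetal platform.
install_config = {
    "apiVersion": "v1",
    "metadata": {"name": "example-cluster"},
    "baseDomain": "example.com",
    "platform": {
        "baremetal": {
            "apiVIP": "192.168.111.5",
            "ingressVIP": "192.168.111.4",
            "hosts": [
                {
                    "name": "master-0",
                    "role": "master",
                    "bootMACAddress": "52:54:00:aa:bb:01",
                    "bmc": {
                        "address": "ipmi://192.168.111.10",
                        "username": "admin",
                        "password": "changeme",
                    },
                }
                # ...two more masters and any workers would follow here.
            ],
        }
    },
}

print(yaml.safe_dump(install_config, sort_keys=False))
```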
I: Right now there are just a couple of workarounds still in the scripts — as I described, things like the configuration resource for the bare metal operator and a few other bits in the installer — but that's basically the current status. If you'd like to find out more, there are a few more links here, though they're a bit hard to read on the slide.
I: There are some docs in the installer — that's a good entry point — and then there are OpenShift forks of the bare metal operator and the actuator, and the same for the Ironic images; many of these are mirrored from the Metal³ upstream repos. So that was all I had, really — just a bit of a heads-up — but I'm happy to answer any more questions now, or feel free to track me down afterwards as well.
I: I'm pretty sure that has been implemented — I can't remember exactly what the interface ended up looking like, but we no longer have that workaround in the dev scripts. Part of the context there, which might be useful: originally we were going after a kind of hyper-converged deployment where you'd have quite a small hardware footprint, and more recently we've also been looking at how we would enable potentially much larger-scale deployments, where perhaps having schedulable masters would be less important.
I: But yes, the idea was to support both the kind of opinionated, appliance-style deployment and also, in the future, potentially larger-scale environments — perhaps for telco usage and that kind of thing — and obviously that's one of the targets the Airship team have been interested in as well.
I: It's all using Red Hat CoreOS at the moment, and obviously, as we've been discussing the OKD 4 roadmap, it would certainly be great to validate it with the community builds as they become available. At the moment the only non-RHCOS bit is the RAM disk we use for the Ironic deployment — that's still either CentOS 8 Stream or RHEL-based in the downstream builds. It might be interesting to explore more of an RHCOS-based RAM disk approach there, but for now we do have that RAM disk.
I: Only in as much as they're interested in the bare-metal provisioning side of it and using the same interfaces. That's really part of the upstream community discussion, and it's kind of independent of the OpenShift integration that has been going on. But the aim with the Metal³ community was to try to get the broader Kubernetes community involved and interested, and over the last few months we've been seeing a few other companies getting involved and committing folks to work on it.
F: Yeah, as an observer it almost feels like these ecosystems at some point are going to morph and merge, because Kubernetes is doing more and more of what OpenStack does. I don't know if you guys have any insight into where this may end up going in the next couple of years, but it does feel like we're really crossing over now, where the superset of capabilities between the two of them is being more and more absorbed by what Kubernetes does.
E: I'm not sure that's a good thing, but yeah, I'm observing the same thing. I'm actually slightly worried, because as we move past containers and things like that and start dealing with bare metal and virtual machines and all kinds of weird scheduling and telco-grade requirements and all that — as many buzzwords as I can throw at it — that's going to make this obnoxiously more difficult than it was before. I mean, until very recently, setting up a Kubernetes cluster — even a very basic one — was...
B: We will be working on — I think — paring down the components that have to go into an OKD install, at least; maybe we'll create profiles and make it configurable in a way so you don't have to install everything. We're going to get that into OKD first and then see how it can trickle down into OCP later on. But that's definitely something we want to look at later, once we have at least a stable OKD-on-FCOS release.
A: All right then — if you can throw the link to this slide deck into the notes, that would be great, Steve. We're getting close to the end of the hour. I just wanted to note that if we keep to the two-week cadence we're trying really hard to keep to, the next meeting would be on November 12th, and I think daylight saving time hits us by then, so I'll send a notice out to the meeting group.
A: We did have one more thing on the agenda that we were trying to sneak in — I'm not sure if we have enough time, Dusty, or if you want to table the Packet conversation until the next meeting.
B: The Packet thing — that would be very helpful for us. I don't know whether it's planned to support our use case on Packet at the moment, which would be taking these sources and building them directly on a Packet instance automatically, as a CI system. That would really help us with getting stuff built in a Fedora environment and not relying on Prow for that. But I don't know what the plan is there, or how the timeline falls, or any of that — just to add that.
B: That would be really helpful for us. We probably shouldn't get too deep into this conversation now, though — that's it for me.
F: Diane, you had tagged me on that, but you may have tagged me thinking that I was actively part of that project. I had put some notes up for everyone else, because I've been consuming that project — using the single-node cluster to build the bundle that gets consumed by CodeReady Containers. I could speak to my experience with it and some of the gaps I see in it being ready for consumption by OKD, but I'm not an active member of that project.
A: If I can — or if anyone can — nominate someone from that group for me to track down for the next call, or the call after that, that would be great. So I think, for me, the open issue I'd like to see talked about on the Google group is, Christian, once we have the alpha, how you want to manage the feedback on it.
A: Definitely. And I will send a note to the mailing list and ask you all to let me know if you're coming to KubeCon, because if I post office hours I need more than just myself there — I can open the door, sit there, take notes, and try to deploy it, but I can't fully help anyone technically walk through this stuff. I will definitely deploy it, though.
A: Yeah — basically I have a cot that I can roll out that day; it's pretty crazy, but there's a lot of stuff going on too, and I really want to use it. If we can get that Fedora Magazine piece and a blog post out, I'd love to be able to socialize all of this, get more external voices, make sure we have adequate feedback, and pick up a few more external resources to work on this with you all.
A: Yep — if you can do that, and reach out (and CC me if you need me to nudge them), I'm sure they'll give us a space. Just, everybody, bring your snow jacket and your snow boots. And that's all I have for today. The next meeting will be at the same UTC time — I'm pretty sure the time zones change, but we'll still keep it at the same one and just adjust the schedule.