A: Okay, which makes me think... now I'm sharing my screen. Where on earth is Rey's hand? I don't know; we'll work it out. Okay, the next instruction is: after the meeting, yeah. Hopefully everybody's got the agenda doc; I posted that in Slack earlier. There are only a couple of things on there that I added, so if you have anything else that you'd like to discuss, please add it to the agenda doc and we'll get to it. Also, please add yourself to attending
if you are here. Okay, jumping right in: we mentioned a 0.6 release last time. I said I was going to create an issue to track 0.6, which I did in plenty of time, namely this morning, and I also created some issues which are linked to it, which I'm personally hoping to get into a 0.6 release. I've linked it there; let me just turn that into an actual link, same with that one.
A: Yeah, so these are just some things I've added: support for application credentials. I think this one's really easy; I was trying to mark it as a good first issue. I think I've screwed that up, because I was supposed to do it using the bot, so it probably doesn't have quite the right metadata, but I think this one is quite easy. I'm interested to know: are any of you interested in this feature? Are you likely to use application credentials?
B: [inaudible]
A: That's really interesting, because I tried to use application credentials a while back, and I thought that when you specified the project ID explicitly in clouds.yaml with an application credential you got a scope error, because it says you can't specify a project ID and provide a scoped credential, or something like that. So.
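[Note: application credentials in Keystone are pre-scoped to the project they were created in, which is why combining one with an explicit project scope can be rejected. A minimal sketch of authenticating with one via gophercloud; the endpoint and credential values are placeholders:]

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func main() {
	// An application credential already carries its project scope, so we
	// deliberately do not set TenantID/TenantName here; supplying both is
	// what produces the scope error described above.
	opts := gophercloud.AuthOptions{
		IdentityEndpoint:            "https://keystone.example.com/v3", // placeholder
		ApplicationCredentialID:     "<app-credential-id>",
		ApplicationCredentialSecret: "<app-credential-secret>",
	}

	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		panic(err)
	}
	fmt.Println("authenticated against", provider.IdentityEndpoint)
}
```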
B: [inaudible]
A: Nice, okay. So yeah, I would like to fix that properly anyway, so that would be good; I think that one was pretty easy. The other two are somewhat more complicated. One of them I've already started working on: the explicit resource tracking in the status objects. I've mentioned this elsewhere.
A: I want to explicitly reference all of the resources that we create, which should also help us to clean them up more robustly, and this is an API change, which is why I'm specifically referencing it here. Interestingly, down at the bottom there, I think I mentioned that I'm proposing to add these, and in fact in my patch, which is still in progress, I add these to v1alpha3, v1alpha4, and v1beta1. That seems like a weird thing to do, but I think it simplifies conversion, because there's nothing to convert. The problem is, if we don't have it in v1alpha4, for example, then we add all this metadata and then we throw it away again, and then the next time we reconcile
we have to recreate it. So we're not getting the benefit of storing the metadata if we throw it away again for a user that's using an older API version, when the change is compatible and in fact identical.
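[Note: a sketch of what this status-side resource tracking could look like; the type and field names below are hypothetical illustrations, not the actual CAPO API:]

```go
package v1alpha4

// ReferencedResources records the UUIDs of OpenStack resources the
// controller has created, so cleanup can delete exactly those resources
// instead of inferring ownership from name- or tag-based heuristics.
type ReferencedResources struct {
	// PortIDs are the UUIDs of ports created for this machine.
	PortIDs []string `json:"portIDs,omitempty"`
	// SecurityGroupIDs are the UUIDs of security groups we created.
	SecurityGroupIDs []string `json:"securityGroupIDs,omitempty"`
}

// OpenStackMachineStatus is elided to the new field for brevity.
type OpenStackMachineStatus struct {
	// ReferencedResources is declared identically in v1alpha3, v1alpha4,
	// and v1beta1, so conversion between versions is a straight copy and
	// nothing is lost on a round-trip through an older version.
	ReferencedResources ReferencedResources `json:"referencedResources,omitempty"`
}
```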
C: [inaudible]
D: ...field yet, and therefore it might be okay to add it in previous versions too.
A: So yeah, no changes to the spec; it's not exposing any additional functionality. Now, interestingly, I mentioned this to a colleague of mine at Red Hat, and he said he has seen other Cluster API patches which add the later-version object as an annotation on the older version, and honestly that sounds disgusting to me; I don't understand why you would do that.
A: However, if that is some kind of convention, I would like to understand why it is done that way, because even with disgusting things, if you look hard enough it turns out there's normally a reason for them, so that might be something to look into.
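[Note: the convention described here is most likely lossless round-tripping: serializing the newer-version object into an annotation on the older version so that fields the old schema cannot represent survive a convert-down/convert-up cycle. A generic sketch of the idea, not any project's actual implementation; the annotation key is made up:]

```go
package conversion

import (
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dataAnnotation is a hypothetical key for stashing round-trip data.
const dataAnnotation = "example.io/conversion-data"

// stashNewObject serializes the newer-version object into an annotation
// on the older-version object, so fields the old schema cannot represent
// are not lost while the object is stored at the old version.
func stashNewObject(newObj interface{}, oldMeta metav1.Object) error {
	raw, err := json.Marshal(newObj)
	if err != nil {
		return err
	}
	anns := oldMeta.GetAnnotations()
	if anns == nil {
		anns = map[string]string{}
	}
	anns[dataAnnotation] = string(raw)
	oldMeta.SetAnnotations(anns)
	return nil
}

// restoreNewObject reverses the process when converting back up,
// reporting whether stashed data was found.
func restoreNewObject(oldMeta metav1.Object, newObj interface{}) (bool, error) {
	raw, ok := oldMeta.GetAnnotations()[dataAnnotation]
	if !ok {
		return false, nil
	}
	return true, json.Unmarshal([]byte(raw), newObj)
}
```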
A: I know our downstream... in fact, I think they're not doing this. I know our downstream vSphere provider, which is probably working broadly the same as the upstream vSphere provider, is not doing this, and it is suffering problems in our CI with namespace overlap. So I would recommend this in general.
A: To be honest, if you have a unique reference to a thing that you created, why would you not store that, rather than trying to infer the thing that you created using heuristics?
D: Does this somehow couple the implementation even more to the OpenStack API? Say something changes within the OpenStack API: is it harder for us to keep track of those changes compared to what we are doing right now?
A: It would be something to look out for, but you wouldn't store something in resources unless you created it, so it would have to be a new type of resource, and the only identifier we're putting in there is the UUID. So it would have to be a significant architectural change, I think, to invalidate that.
A: Even in ten years' time you're going to want to delete a port, and the port isn't going to become something else, I don't think; or if it has, it's going to have the same identifier.
A: Anyway, sorry, I'm going too far into the weeds on that one. I'm just bringing it up as something I'm interested in which is an API change, in this context. The other one is something I may have mentioned before, which is prevention of reconciliation of machines with invalid credentials. This is, bizarrely,
something we have seen at least twice that I know of, where somebody has updated their cluster with valid credentials for the wrong cloud, which causes all of your machines to be marked failed, because we go and look up the resource and of course it doesn't exist.
A: So you don't get an error response from the API; you get a definite "this doesn't exist", which of course it doesn't, because it's the wrong cloud. In one case the reason was they'd put their staging credentials into production, and in the other I think it was QE, actually, so the things they do are always a bit odd; I think they'd changed the region, and the resources didn't exist in the new region.
A: So essentially my thought is: can we store something in the status which unambiguously identifies the cloud that the resources were created in? Then, when we start to reconcile, if we authenticate and see that the cloud we're connected to is the wrong cloud, we can just stop and return a useful error. I think project ID would do that.
A: Although, I'd have to check, I think project ID would be common across regions, so we might need region and project ID. But anyway, that's something else I'm thinking about.
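[Note: a sketch of the guard being proposed; the function and the status fields it compares against are hypothetical, and it assumes gophercloud's identity v3 token API:]

```go
package compute

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/identity/v3/tokens"
)

// checkCloudIdentity compares the project (and region) we authenticated
// against with the values recorded in the status when the resources were
// created, and refuses to reconcile on a mismatch instead of letting the
// machines be marked failed.
func checkCloudIdentity(identity *gophercloud.ServiceClient, statusProjectID, statusRegion, currentRegion string) error {
	// Look up the project scoped into the token we just obtained.
	project, err := tokens.Get(identity, identity.TokenID).ExtractProject()
	if err != nil {
		return fmt.Errorf("extracting project from token: %w", err)
	}
	if statusProjectID != "" && project.ID != statusProjectID {
		return fmt.Errorf("credentials are scoped to project %s, but resources were created in project %s; refusing to reconcile", project.ID, statusProjectID)
	}
	// Project IDs may be common across regions, so compare the region too.
	if statusRegion != "" && currentRegion != statusRegion {
		return fmt.Errorf("connected to region %s, but resources were created in region %s; refusing to reconcile", currentRegion, statusRegion)
	}
	return nil
}
```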
A: The last one is a bit of a MAPO thing, a bit of a downstream thing.
A: It would be very nice if I could tweak the machine creation internal API. This is something that came out of the work I did on the reconciliation stuff and the resource tracking. I always found it weird what we did: we've got CreateInstance, and then we did a little bit of pre-computation in CreateInstance, and then we call the small-c createInstance, and CreateBastion was
doing something similar. So in my patch, and I may pull this out and create a separate PR from it, I make that slightly cleaner: we essentially make that pre-computation, you know, refactoring step, a call of its own.
A: So you turn a machine into an InstanceSpec, and you turn a bastion into an InstanceSpec, and then you call create on the InstanceSpec, which is essentially what the current code is doing, except it's not entirely obvious because of the way it's written. I want to make that slightly more explicit, because it makes the code a little bit cleaner. And what we're doing downstream is essentially not using any of the upstream API objects: we've got our own API objects and our own API.
A: So we would have to do our own refactor step, and it would be nice if we could just do the same as the OpenStackMachine and bastion code are already doing, and then just call the create-InstanceSpec code, which would be very useful.
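[Note: a sketch of the refactor shape being described; the types and function names below are illustrative stand-ins, not CAPO's actual signatures:]

```go
package compute

// Illustrative stand-ins for the real API objects.
type Machine struct{ Name, ImageID string }
type Bastion struct{ Name, ImageID string }
type Instance struct{ ID string }

// InstanceSpec is the fully-resolved description of a server to create;
// the pre-computation happens once, up front, instead of inside create.
type InstanceSpec struct {
	Name    string
	ImageID string
	// ...flavor, networking, metadata elided...
}

// Both callers converge on a single creation path by first producing a spec.
func machineToInstanceSpec(m *Machine) *InstanceSpec {
	return &InstanceSpec{Name: m.Name, ImageID: m.ImageID}
}

func bastionToInstanceSpec(b *Bastion) *InstanceSpec {
	return &InstanceSpec{Name: b.Name, ImageID: b.ImageID}
}

// createInstance takes the already-resolved spec, so a downstream caller
// with its own API objects only needs to build an InstanceSpec.
func createInstance(spec *InstanceSpec) (*Instance, error) {
	// ...single creation path shared by machines and the bastion...
	return &Instance{ID: "placeholder"}, nil
}
```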
A: Anyway, that's three things. Does anybody have thoughts on a 0.6 in general?
A: Any API changes that anybody else wants to get in there? Bug fixes? The timescale I'm thinking of for this... I mean, the things that I've written down there are quite a lot of work, so probably a month or two.
D: It all sounds really reasonable, and the effort you put in is highly appreciated. Maybe I can finish, or even create, since I haven't created it yet, the pull request to mitigate this error state of the OpenStack machines, where we set the failure reason and failure message. Okay.
B: [inaudible]
A: Oh, that reminds me: in my bastion spec patch, which is the one I may pull out and make a separate PR, I've removed all of the failure conditions related to the bastion, so there are now no failure conditions related to the bastion host. The cluster will still come up and be marked ready even if the bastion fails; the only thing that would be failed is the bastion. But for machines in general, yeah, the failure state is just generally problematic.
D: ...state. So what we did downstream for now was to remove most of the calls that set this failure message, and just set it in very rare cases (maybe it's in the issue), like when the OpenStack machine, so the actual virtual machine, fails.
C: What I just want to say: I think we have a kind of bug in the OpenStackMachine status, because there the failure message and reason are called error message and error reason. I think we should align that, maybe in the same PR Sean already mentioned.
D: That would require some kind of, what is it, API change.
A: That would definitely be an API change, but at the same time, I mean, we're talking about going from v1alpha4 to v1beta1. I'm okay with that, as long as we do the conversion correctly, which would be very easy, so I think that would be okay.
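[Note: the conversion for a pure rename is mechanical; a sketch with hypothetical type names, not the exact CAPO types:]

```go
package conversion

// OldStatus is the current shape, with the misnamed fields.
type OldStatus struct {
	ErrorReason  *string
	ErrorMessage *string
}

// NewStatus aligns with Cluster API's failureReason/failureMessage.
type NewStatus struct {
	FailureReason  *string
	FailureMessage *string
}

// convertStatus is trivial because only the field names change; nothing
// is lost in either direction, so no round-trip stashing is needed here.
func convertStatus(in *OldStatus) *NewStatus {
	return &NewStatus{
		FailureReason:  in.ErrorReason,
		FailureMessage: in.ErrorMessage,
	}
}
```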
A: Well, by all means link things to the 0.6 issue, but that's just sort of a dumping ground for stuff that might get forgotten. So if you have a PR that's going to land in the next week, then yeah, link it to the 0.6 if you want.
A: Yeah, I'm definitely in favor of that. Anyway, it sounds like everybody is; I think everybody's fed up with the failed state. I wouldn't necessarily set it even for errored machines.
D: So a faulty OpenStackMachineTemplate, I think, could lead to a failed machine.
A: Yeah, but, I mean, the example I think of is: what if you specified the wrong image? Then you should go into the failed state. But what if it's not the wrong image, you just haven't uploaded it yet? Should you keep trying in case it becomes the right image?
A: So if it fails to create the instance, should it try again? If it fails to create the instance because of an invalid parameter, which is something we can report, since the OpenStack API will tell us, you know, "not worth retrying that", do we retry anyway in case the parameters become valid? I'm inclined to say no, because we typically have the controller wait until resources are available before we try to create the instance, so it's more likely that the user screwed up, and they would want to know about that, than that they're doing something odd. So maybe failure to create should be a failure, but I don't think the errored state should be a failure except on creation. Oh my word, that's complicated, yeah.
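[Note: one way to express that policy, sketched with gophercloud's error types; treating a 400 on create as terminal is an illustration of the idea discussed here, not settled behavior:]

```go
package compute

import (
	"errors"

	"github.com/gophercloud/gophercloud"
)

// isTerminalCreateError sketches the policy discussed above: a 400 from
// the OpenStack API on create means an invalid parameter the user has to
// fix, so mark the machine failed; anything else is treated as possibly
// transient and retried on the next reconcile.
func isTerminalCreateError(err error) bool {
	var badRequest gophercloud.ErrDefault400
	return errors.As(err, &badRequest)
}
```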
D: [inaudible]
C: [inaudible]
A: I don't know what to do about that, yeah, but we should at some point have a plan to groom those on a regular basis. How regular?
D: Oh, you mean you could also have a separate meeting, right, for that as well.
A: Yeah, yeah, I sat in on it; it was interesting, by the way. I mean, if you can make that meeting, I would, just because it's interesting, you know.
A: Well, you can see quite a lot of the things that they're thinking about, in a minute, just by looking at their bug backlog.
A: Do we have anything else? AOB, by the way, is that a British thing? AOB, "any other business"; you've never seen it before? Okay, it's a very common thing, yeah; it's the last item on the agenda. "Such and such had a baby" would get added to the agenda under AOB, or yeah.
D: Anyway, like Michelin, yes, yeah, okay, yeah.
A: Yeah, okay, I think we're done. No, wait, what actions did we have out of that? We had: Sean's going to create a PR for the machine failure stuff. What was the other PR we were talking about?
A: And because that's an API change, we should track that in the 0.6.
D: Looking forward to your ideas with that resource stuff.
A: I'll start posting. Like I said, I might split it up: I might split out the patch where I refactored how the bastion and the machine controller call create instance, and post that separately, because it was starting to turn into an XXL, XXXL patch, which is probably getting a bit hard to review. So yeah, I think I'll break it up.
A: Okay, thank you very much; see you on Slack, and see you in a couple of weeks. Yep, sounds good.