From YouTube: 20191204 - Cluster API Office Hours
A: Hi, today is Wednesday, December 4th, 2019. This is the Cluster API office hours meeting. Cluster API is a subproject of SIG Cluster Lifecycle. We do have meeting etiquette: if you are interested in discussing something, please add your topic to the agenda document, and please use the raise-hand feature in Zoom if you'd like to talk, and I will do my best to call on you.
A: This meeting is also being recorded, and if you haven't already, please add your name to the attendee list, and we will go through the agenda. I don't see any PSAs, demos, or POCs, so we can move on to the discussion topics. There's one here about hardware configuration validation; I'm not sure if the person who added this is on. Are you here, by any chance?
B: Yes, hello. We've been discussing downstream some power options and node maintenance, and this is related to remediation as well. The first thing up is rebooting machines. We have some users, particularly the bare-metal folks, who are really interested, as part of the remediation story, in being able to reboot a particular machine. I've had some back-and-forth about where that functionality should live, but I feel like the actual implementation of handling the reboot should probably be centered around the Machine and the API between the Machine and the infrastructure provider. So basically we're providing just an interface there into the mechanism to be able to do that, rather than saying we have this thing called the bare metal operator and writing our remediation system to interact with that thing directly. There are a couple of different weird edge cases when it comes to rebooting. The idea is to have the Machine object gain a couple of new fields, maybe an annotation or something in the status field: something requests a reboot (it could be anything), and in doing so it updates that particular field on the Machine and puts a timestamp of when it requested the reboot.
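A minimal, purely illustrative sketch of what that could look like on a Machine object; the annotation key, timestamp format, and status field below are assumptions about the idea being discussed, not an existing or agreed-upon API:

```yaml
# Hypothetical example only: the annotation and status field are placeholders
# for the reboot-request mechanism being discussed, not an existing API.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: worker-0
  annotations:
    # A client requests a reboot by stamping the time of the request.
    example.cluster.x-k8s.io/reboot-requested: "2019-12-04T17:05:00Z"
status:
  # The infrastructure provider could report back when the reboot completed.
  lastRebootCompleted: "2019-12-04T17:07:30Z"
```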
B: Otherwise it does nothing, and that gives us some functionality to let other things cooperate. Let's say I have two things that might want to reboot a node for one purpose or another, and I don't really want them racing on the reboot. The idea is that we have well-behaved clients that understand this logic, and we can export some methods they can just run to say yes, I want to request a reboot, or not.
A: Would you mind adding a comment to the GitHub issue? It sounds like that's maybe some good prior art to take a look at. All right, I'm going to move on to the next one from Michael. If you all have any additional feedback, please add it to the GitHub issue. Next up we have stopping machines, right?
B: Yes. Re-provisioning a host entirely from scratch is not desirable, especially if you're talking about something like a converged storage setup where there would be a lot of replication and so on, so that's one use case here. Another interesting use case I thought about: it would be really nice to be able to take a snapshot of, say, your master for disaster recovery purposes, and it would be really cool to have some tooling that could build on top of this and maybe automate that effort. To do that, you might need to stop the machine and maybe do some other things to it, and there could be multiple things that want to use this at once. But I think what's nice about this particular mechanism, especially doing it on the Machine, is that well-behaved things such as the machine health checker, which we've been talking about in terms of deleting machines, can cooperate. When it says, oh hey, this node is unhealthy, before it just goes off and deletes the machine, it can check the requested power state, because if the machine has been requested to be stopped, obviously it's going to look unhealthy; that is something somebody has directed the machine to do. So this serves as a point of managing the instance power state, but also of informing those other abstractions that are trying to do things with the machine.
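As a rough illustration of the kind of field being described (the names below are hypothetical and do not exist in the current Machine types), a requested power state plus an observed state might look like this:

```yaml
# Hypothetical sketch only: powerState fields like these are not part of the
# v1alpha2/v1alpha3 Machine API; they just illustrate the idea above.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: storage-node-3
spec:
  # What a client (user, snapshot tooling, etc.) is asking for.
  powerState: Stopped
status:
  # What the infrastructure provider last observed, managed out of band.
  # A health checker could consult this before treating the node as failed.
  observedPowerState: Stopped
```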
B: And that brings up the next item under challenges: some things you can't just stop or start. For an instance or machine, as you can see in, say, the EC2 docs, spot instances are an example: you can't stop a spot instance, that's not a valid operation, and maybe there are some other deployment models where stopping just isn't implemented for one reason or another.
B: I think, at least for our particular bare-metal use case, we're doing everything out of band, so all the power operations would be out of band. I'm not envisioning a daemon or something that runs a shutdown on the machine; rather, this would be an out-of-band operation, and it would all be implemented by the particular infrastructure provider. So it would be up to the infrastructure provider to determine how to actually go about implementing starting and stopping these things.
H: So yeah, just an FYI, we've been working on the SIG Cloud Provider, SIG Node, and SIG Storage side on a way to handle nodes that are shut down, or at least in a failure mode. We might want to sync with SIG Cloud Provider and get signaled, especially if something happens upstream in Kubernetes. I'm linking a KEP that I'm working on with Clayton and Jim. If we implement this in Cluster API plus in Kubernetes itself, you might run into multiple...
H: Yeah, in the cloud interface we have right now there's a call that says whether an instance is shut down or not. Depending on the instance lifecycle adopted by the provider, some providers model a failed state as a real state in the state machine, so depending on how the code is implemented, InstanceShutdownByProviderID would return true if the instance is stopped or failed; it can handle both, technically.
H: I'm not sure I understood, but what we have right now is a call at the cloud interface level, so basically whenever there is a node that is shut down or in a failed state, we add a change to the node object, but we do not act on it yet, because there are a lot of concerns around race conditions and the way we handle volumes and persistent volumes.
B: I'm not sure where the race condition would be coming from. It sounds like this is mostly a read operation, so...
B: Well, I think that's tricky. It's probably something that we need to account for. I'm not very familiar with this particular KEP, so I'll give it a review. Thanks.
B: So this one is kind of directly related to the previous two. As we talked about, for some infrastructure providers it might not be desirable to reboot a particular host for one reason or another, or the functionality just isn't implemented. Something that would be nice is that when the actual infrastructure object is being created and we're setting things in the status field on that object, we could have a features section: a section that would be synced between the infrastructure object and the Machine object, and that can serve as an informer to well-behaved clients or to end users to say, hey, these things are supported or not supported. Two of those things would be stop and reboot, obviously the ones that were talked about here, and another useful thing would be...
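A rough sketch of the kind of synced features section being floated here; the field name and values are hypothetical, not an existing contract between infrastructure providers and the Machine controller:

```yaml
# Hypothetical example: a capabilities list reported by the provider-specific
# machine object and mirrored into the Machine status for well-behaved clients.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: BareMetalMachine        # stands in for any infrastructure machine type
metadata:
  name: worker-0
status:
  ready: true
  capabilities:               # assumed field, synced onto the Machine's status
    - reboot
    - stop
```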
B: As for where I'm at: I would be happy to develop some of these things. Where we're at internally is that we need some of this functionality, and rather than doing this on our own and having Cluster API end up doing something completely different, I'd like to try to do it here, where we can all try it out and make sure this makes sense as a model, at least for now. That way we're not continually doing our own thing, and I think that would be better for everybody.
A: Yeah, I totally agree. We don't have an official Tiltfile for all of our repos, and if you're not familiar with Tilt, it's a tool that makes it really easy to make changes and get them deployed fairly quickly, without having to go through the hassle of building the images yourself and getting them deployed. I think this would be a great candidate for a Tilt type of environment, where somebody could make the API changes and the controller changes in core Cluster API plus at least one infrastructure provider.
A: Then you could deploy everything together and try it out, and you could have a series of complementary pull requests that, when you put them all together, add this functionality. So I think anybody who's got feedback should definitely check out these three issues and add comments, and I think it's totally worth trying to prototype this and do those PRs.
H: Yeah, I added this item just to discuss infrastructure providers' support for v1alpha3, especially the ones that do not have built-in solutions for load balancers. I've seen that the control plane proposal requires a stable API endpoint to be provided. For CAPV, what we have been doing is that if you do not bring your own control plane endpoint, we fall back to the first control plane IP.
A: Well, when I talked to you a couple of weeks ago in San Diego, I asked you, and I asked this of Mosh as well: is there a situation where people would not have any sort of load balancer with a stable endpoint name, and would that be something we'd want to continue to support for providers like CAPV? And I thought we had mutually said...
H: There are environments where users won't have any load balancers, especially given the kinds of setups people are using, or you might run into environments that just do not provide load balancing support. I know that at some point we were relying on a fallback: if the endpoint wasn't set, we were falling back to one of the IPs. So what I wanted to know is whether we could implement some fallback mechanism if the stable API endpoint wasn't set.
E: I'm just echoing exactly what you're saying. I was going to mention keepalived, which I think is just a specific implementation of what you were describing, where you have a VIP. The VIP is something that you know before you create the cluster, or something that you can define and reserve beforehand, and you could provide that. I think that's one of the options when there's not an external load balancer or some kind of load balancer service available.
A: Yeah, something like that. You also could have a Kubernetes Service that manages the endpoints; we could have a machine load balancer implementation that manages the endpoints. We could run HAProxy, nginx, Envoy, whatever, on the management cluster and manage its configuration based on the IP addresses of the machines. I mean, there are options, and I think we could come up with at least one reference implementation that is deployable on the management cluster and hopefully would work.
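A minimal sketch of what such a reference implementation might deploy on the management cluster; the object name, port, and config layout are assumptions for illustration, not an existing component:

```yaml
# Hypothetical sketch: an HAProxy instance on the management cluster whose
# backend list a small controller would regenerate from control plane
# machine addresses as they change.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workload-cluster-apiserver-lb    # assumed name
data:
  haproxy.cfg: |
    frontend apiserver
      bind *:6443
      mode tcp
      default_backend control-plane
    backend control-plane
      mode tcp
      # entries rewritten whenever Machine IPs change
      server cp-0 10.0.0.11:6443 check
      server cp-1 10.0.0.12:6443 check
```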
A: I know we had talked about pivoting and dealing with making sure that all of the certs are still good if you're starting on a bootstrap cluster and then pivot over, and how the stable endpoint works in that case. I don't know that we have that solved, but I definitely think we can come up with some sort of reference implementation to do this and get rid of the machine-IP-based fallback code inside CAPV.
J: It depends exactly how you do it, but the way I brought it up, I had a cluster running before I needed that stable IP. So we don't even need to do any of this keepalived stuff; we can do something like leader election, and it would be relatively agnostic, other than having to support ARP or whatever the correct networking word is.
F: We just wanted to point out that we currently run into the exact same problem in the Metal3 project for the bare-metal setup, and we were looking into this as well. We are interested in participating in getting a reference implementation together, like, for example, a typical keepalived plus HAProxy on the node, because we will in any case need to pivot away from our bootstrap cluster, which is going to disappear. So we're working on this on our side, and we would like to collaborate.
A: Great. So I don't believe he's here right now, but Mosh has been working with you on some of this, so my best recommendation would be to sync up with Mosh and anybody else who's interested in working on this and has got some spare cycles. Generally, what we are potentially proposing for the future, beyond v1alpha3, is that there could be a new machine load balancer API that is very similar to the split we have between Machines and infrastructure machines.
A: So if you were on AWS, for example, and you're using CAPA, the AWS provider, right now you're forced to use an AWS elastic load balancer; it's provisioned for you, there's no way to turn it off, and there's no way to change it. If we had a machine load balancer as a first-class element, then you could use that out of the box, or swap in something else if there's something else you want to use.
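To make that concrete, the split being described might mirror the existing Machine/infrastructure-machine pairing; everything below is a hypothetical shape, not a proposed or existing API:

```yaml
# Hypothetical sketch of a first-class load balancer object that delegates to
# a provider-specific implementation, mirroring the Machine split.
apiVersion: cluster.x-k8s.io/v1alpha4        # assumed future API version
kind: MachineLoadBalancer
metadata:
  name: my-cluster-apiserver
spec:
  clusterName: my-cluster
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: AWSMachineLoadBalancer             # or an HAProxy/keepalived flavor
    name: my-cluster-apiserver
```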
A: There is a Google Doc that I can go dig up, and as part of what I mentioned before, they're working on doing a prototype for CAPV specifically. If things look like they're working fairly well, then maybe for v1alpha4, or whatever comes after alpha 3, we could consider trying to promote that as a core API in Cluster API.
D: I've recently been working on some webhook-related stories, and I thought of writing an end-to-end test to essentially ensure that everything was hooked up correctly. It occurred to me that I could use the end-to-end test framework that Chuck initially proposed within Cluster API. Now, I know the intention was that the e2e test framework was meant for the providers, but I was wondering if there were any reasons or issues against using it within CAPI itself. One thing I noticed also...
D: There was quite a bit of duplication of helper functions and stuff like that in CAPI, so I just wanted to get an understanding from the community: is this something that would be useful, to have a set of end-to-end tests within CAPI that uses the test framework? Should I go about creating a GitHub issue and have a discussion there before I create a PR for it, or is it an anti-goal to even use the e2e test framework there?
K: I think in general I definitely like the idea of leveraging the e2e test framework within CAPI itself where we can. I think the one challenge we might find is that, if anything actually requires some type of provider, we would probably have to also add in a deployment of the example provider that we used to use in v1alpha1.
D: Cool, yeah. For now, I think the only use cases, or rather the test cases, I had in my head were just for exercising Cluster API itself, not necessarily any external providers, but I'll keep that in mind. So, as an action item, do I go about creating a GitHub issue where I can explain this, or do I just create a PR and then have the discussion on the PR itself?
A: All righty, I don't see any other topics. Before I go on to backlog grooming, something I should have done at the beginning: just looking at the attendees today, I see some new names. No pressure, but if you're new and you're interested in saying hi and introducing yourself, I'll give you all a minute or two to do so. Again, no pressure; if you're not interested in saying hi, that's totally fine.
A: We didn't have any PSAs or demos, but in the future, if there's anything that you want to show off, we certainly welcome it and would love to see it. It's a very open discussion, as I imagine you've probably seen. Every time we lead up to one of our weekly meetings, we'll have an entry in the agenda document, so feel free to add discussion topics beforehand and we'll go through them.
A: Alrighty, I'm gonna move on to backlog grooming. This is the part of the meeting where we take a look at every open issue in Cluster API that does not have a milestone associated with it, and we currently tend to use three milestones beyond the blank one. For our current release of Cluster API, which is what we tend to call v1alpha2, there is the v0.2.x milestone, and we use this for anything where we know that we need to backport a bug fix, so we'll create an issue in 0.2.x for v1alpha2, but we generally do not do new feature work in the 0.2 release stream. We have v0.3.0 for our upcoming release, so everything that is being developed on master right now will ultimately make its way into 0.3. And then 'Next' means that we have triaged the issue.
A: We've gone over it in this call, or the author or one of the maintainers has reviewed it asynchronously and decided it is definitely not in scope for the next upcoming minor release. So 'Next' basically means we've looked at it and we've talked about it, or someone has made the decision that we'll deal with it in the future.
A: At the end of every release cycle, slash the beginning of the upcoming one, we do try to do a backlog grooming where we look at all of the open issues and try to go over them to make sure that we don't lose track of anything. With that said, I'm going to start at the bottom, with the oldest open one first. I know Chuck is not here today, but this one was about trying to include files differently based on the choice of the file content, for the kubeadm bootstrapper. This was something that had been moved over from the old CABPK repo.
A
Don't
see
us
doing
this
for
the
next
release,
given
the
amount
of
work
that
we
have
so
I'm
going
to
put
it
in
the
next
milestone
and
we
can
revisit
this
at
the
next
cycle,
and
this
is
an
open
thing
like
I'm
gonna
tend
to
talk
a
lot
because
I'm
just
going
through
these,
but
if
you
feel
strongly
that
I
assign
the
wrong
milestone
or
you
want
to
change
the
priority,
please
feel
free
to
raise
your
hand
and
speak
up.
I.
Don't
want
this
to
just
be
me
deciding
everything
alright.
B: Yeah, I left the same comment on this one, but my concern is: if the cluster is down, it might not be able to respond to these requests, and we would probably need that cleanup logic in Cluster API anyway. So I don't really see a reason to try to duplicate that effort, because we don't want to leak resources if the cluster is down.
A: All right, I think we have a bunch of bootstrap provider ones, although this one is a request to be able to customize the kubeadm call specifically. Some of these were things where I think the flags are already fields in the kubeadm configuration types, although I think we might need v1beta2 or newer to get to some of them.
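For context, a fair amount of that customization is already expressible through the kubeadm types embedded in the bootstrap config; here is a sketch, with the caveat that the exact fields available depend on the embedded kubeadm API version:

```yaml
# Sketch of customizing the kubeadm call via the KubeadmConfig types; the
# specific fields shown are examples, and availability depends on the
# kubeadm configuration version (e.g. v1beta2 or newer for some of them).
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: controlplane-0
spec:
  clusterConfiguration:
    apiServer:
      extraArgs:
        audit-log-path: /var/log/kube-apiserver-audit.log
  initConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "example.com/pool=control-plane"
```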
A: Okay, next is my proposal to move the Machine bootstrap data so that it is not an inline string but a reference to a Secret, because right now we store it as a string on the KubeadmConfig for the kubeadm bootstrapper and then it gets copied over to the Machine, and given that it may contain some sensitive data, it should be in a Secret instead. This one would be a breaking change from an API perspective; I don't know that it would really be convertible between alpha 2 and alpha 3.
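Roughly, the proposal would replace the inline string with a Secret reference along these lines; the field names below are illustrative of the idea, not a confirmed design:

```yaml
# Hypothetical before/after sketch of the change being proposed.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: worker-0
spec:
  bootstrap:
    # Today: bootstrap data is an inline base64 string on the Machine,
    # even though it can contain certificates and join tokens.
    data: IyBjbG91ZC1jb25maWcuLi4=
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: worker-0
spec:
  bootstrap:
    # Proposed: the bootstrapper writes a Secret and the Machine references it.
    dataSecretName: worker-0-bootstrap-data   # assumed field name
```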
A: I'd like to take some time to do some brainstorming and write some comments on the issue around what it would look like if somebody has an existing v1alpha2 management cluster with existing workload clusters and then upgrades to v1alpha3. Would a MachineSet continue to function? Would the control plane? Well, KubeadmControlPlane is new, but what sort of issues would we have? So I'm gonna stick this in the milestone.
A: I had a documentation one on policy guidelines for core dependencies and tooling versions. We already do have some documentation saying that we need to be careful when upgrading core runtime dependencies like controller-runtime and client-go, so I think this is probably not a huge amount of work, especially given that, at least on master, we now have kustomize coming from the Go module, and so we can pin that. This is one that I was going to work on, so I'm just gonna stick it in the milestone.
P: So, yes, if anyone was at the breakfast in San Diego the other week: some people from CyberAgent in Japan, the streaming service, produced a Japanese-language book, which Jason is modeling right now, and they are willing to provide a Japanese translation of all our documentation. This is just an issue to track how we actually do that; Jason suggested separate subdirectories, which looks like the way it should go.
A: I'll stick it in the milestone; we can always punt it for later if needed. And that's all the issues, and we're at the top of the hour. So thanks, everybody, for attending. Next week we're gonna try to do a discussion on the proposal process and the release process, and see if there's anything we can improve upon. I'll see you all next week. Thanks!