From YouTube: Kubernetes Federation WG sync 20180815
A
Yep, that makes sense, I can start with myself. So the task that I undertook a month or so ago was to implement all the scheduling types. The scheduling types identified were ReplicaSets, Deployments, Jobs and HPAs, and I am somewhere in between on implementing Jobs as a scheduling type, a higher-level scheduling type. I should be able to finish that sometime this week; I'm currently implementing test cases for it.
C
B
So perhaps I can give an update about that part. Currently I'm working on two design documents. The first one I'm working on with Alex from IBM, and we want to do a proposal for federated reads, which would enable getting resources across different clusters. That's the first document, but currently we don't have too much progress on that one.
B
But we hope that we can have more detail this week, so that we can get it reviewed next week. The other proposal I'm working on is the federated pull design, and I posted the design here. I hope that we can have some review today; I already got some review comments from Paul and from Maru, and I also had a discussion with your friend today, and he also gave me some very good comments. So I hope we can go through that today, yeah.
F
Myself, I'm focused on enabling the Federation control plane to run in an arbitrary namespace, and the goal is to allow Federation to run in a single namespace and target a single namespace. This isn't going to be the default, so this isn't going to change behaviour for anybody who's expecting it to continue to be global for the hosting cluster.
F
But the intention here is to reduce the cost of trying it out. Potentially you could use it on a cluster that you actually value: installing alpha software in a namespace with restricted permissions, giving it restricted permissions to a target namespace, seems like potentially a good way to accomplish that.
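As a rough illustration of what "restricted permissions to a target namespace" could look like, here is a minimal client-go sketch that grants a federation service account a namespaced Role rather than a cluster-admin ClusterRole. The namespace, service-account name and Role name are hypothetical, and the Create signatures assume a recent client-go; this is not the actual federation-v2 deployment code.

```go
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical namespace and service-account names, for illustration only.
	const ns = "federation-test"
	const sa = "federation-controller-manager"

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A namespaced Role rather than a cluster-admin ClusterRole: the control
	// plane may only touch resources inside its own namespace.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "federation-namespaced", Namespace: ns},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"*"},
			Resources: []string{"*"},
			Verbs:     []string{"*"},
		}},
	}
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "federation-namespaced", Namespace: ns},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: sa, Namespace: ns}},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "Role", Name: "federation-namespaced"},
	}

	if _, err := client.RbacV1().Roles(ns).Create(context.TODO(), role, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := client.RbacV1().RoleBindings(ns).Create(context.TODO(), binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("granted namespace-scoped permissions to", sa)
}
```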
F
I think that my focus is on simply allowing the control plane to be isolated. That opens up the possibility of, you know, a tenant in a multi-tenant cluster running a Federation control plane, rather than allowing a single control plane to selectively propagate to different clusters per tenant. I'm not saying that's not doable, but that's complicated, and I'm just trying to make it so that people can try this thing out.
C
So, just for my own education: is what you're doing essentially making the namespaces that the various pieces of the Federation control plane run in configurable, and then giving the Federation control plane credentials that are restricted to a particular namespace? Is that basically what you're doing, yeah?
F
Yeah, like the behaviour wouldn't change in terms of authorization. You'd still have to have privileges to be able to create the necessary service account in the underlying cluster. It just wouldn't go and grant a cluster-admin service account, which would be potentially dangerous if this was a production cluster, while we're still in alpha. So by restricting to a single namespace, they could use a throwaway namespace, or one where they weren't running production workloads, and experiment with Federation.
C
Yeah, now I totally understand the goal. I guess I'm just not clear on... maybe I just need to go and have a look at how it works today. It seemed very straightforward: you would just give it credentials that only had access to the namespace you wanted it to have access to, and then you would have achieved that. But maybe it's more complicated than that, I guess.
C
A
Yeah, Maru, and then there was an alternative idea of watching only particular namespaces. Right now the informer for all the controllers watches all the namespaces and tries to propagate to all the namespaces, so maybe we could restrict it to a certain namespace. Maybe we couldn't do it for multiple namespaces, but a single one we could make configurable.
F
So that is partly in join, or just implicit because of the namespace the control plane is running in. I don't actually think there's anything to preclude targeting multiple namespaces; I think the way informers work, you can just provide a list of namespaces, and the targets just have to be namespaces that it has access to. So the first pass will be the single namespace. If it proves useful, then we can extend it, but the goal initially is just to get something as simple as possible that people can try.
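For context on the informer point above, here is a minimal client-go sketch of restricting a shared informer factory to a single namespace; the target namespace is a hypothetical example, and this is only an illustration of the mechanism being discussed, not the federation-v2 controller code.

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical target namespace; metav1.NamespaceAll ("") restores the
	// current cluster-wide behaviour described in the discussion.
	const targetNamespace = "federation-test"

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// WithNamespace restricts every informer built by this factory to a single
	// namespace instead of watching the whole cluster.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace(targetNamespace))

	deployments := factory.Apps().V1().Deployments().Informer()
	deployments.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Printf("observed deployment %s\n", obj.(metav1.Object).GetName())
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching until killed
}
```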
H
Most recently, the e2e testing infrastructure had been sort of running a duplicate set of tests alongside the integration tests, and we needed to start running against a real cluster just so that we can have some of the actual kube controllers running in place. So some of the work there involved setting up a minikube cluster and running the e2e tests, which we call unmanaged: by providing a kubeconfig, they'll actually run against this minikube cluster that's set up, and that minikube cluster relies on running without a VM driver, which effectively...
H
I looked into test-infra with their Docker-in-Docker solution, I had looked at minikube, and I had looked at the kubeadm Docker-in-Docker cluster as well. Test-infra and the kubeadm Docker-in-Docker cluster didn't really prove to be mature enough yet to support multiple clusters. Recently kubeadm-dind-cluster does support multiple clusters, but there are still some gotchas around networking and being able to access one cluster from the other.
H
So right now we have at least a single cluster that we're testing against, and it's part of a CI job. When you create a PR or submit any changes, it will run the integration tests and the e2e tests against that single minikube cluster, so I think that's good enough for now, and we will look at revisiting that once there's better support for multiple clusters, and of course this is multiple clusters running, you know, as part of some dev workflow.
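As a sketch of the "unmanaged" idea described here, the following shows one way a test suite might select a real cluster when a kubeconfig is supplied and otherwise fall back to its own fixture. The flag name and the managed-fixture helper are hypothetical assumptions, not the actual federation-v2 test harness.

```go
// Package e2e sketches the "unmanaged" idea: if a kubeconfig is supplied, the
// tests talk to that pre-existing (e.g. minikube) cluster; otherwise they fall
// back to whatever in-process fixture the suite normally starts.
package e2e

import (
	"flag"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

var kubeconfig = flag.String("kubeconfig", "", "path to a kubeconfig for an existing cluster")

// clusterConfig returns the rest.Config the tests should run against.
func clusterConfig() (*rest.Config, error) {
	if *kubeconfig != "" {
		// "Unmanaged" mode: run against a real, already-running cluster.
		return clientcmd.BuildConfigFromFlags("", *kubeconfig)
	}
	// "Managed" mode: start the suite's own in-process control plane.
	return startManagedFixture()
}

// startManagedFixture is a placeholder for the suite's in-memory fixture.
func startManagedFixture() (*rest.Config, error) {
	panic("managed fixture not implemented in this sketch")
}
```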
H
This isn't trying to use a public cloud provider's infrastructure or anything like that yet. I don't know if we'll get there, but at least for now this will work. So probably later this week or early next week I'll be looking at taking on something else, now that the majority of that work is complete.
C
H
Yeah, that's the goal, but unfortunately minikube, running a cluster outside of a virtual machine, doesn't currently support more than one cluster, or profile, which is what they call it; it's a flag that you can provide. So we're sort of stuck with one cluster right now, and it may be that the test-infra solution, which they're actually working on, to try and get possibly just a Docker-in-Docker solution and some management capabilities outside of that, could be the next possible solution.
H
C
Now, the reason for my question is that from historical experience, running real clusters is just extremely expensive, and to do proper testing you need many of them, and sometimes they need many nodes in each one, etc. So this is super valuable, but at the moment, given that we're restricted to a single cluster, it sort of restricts our ability. We are called SIG Multicluster, after all, and we need to run tests that actually span multiple clusters, many of them anyway.
C
Do you have a sense of, if we just wait for other people to develop the support, how long we are going to have to wait? And is it worth making some sort of interim plan, like maybe running two VMs, each with a minikube cluster in it? I don't know if that's feasible and if it's easy, but you know, some sort of interim plan until there's a better plan available.
H
Yeah, if we wanted to run minikube, it will definitely support multiple clusters in virtual machines, but we would have to pick a different CI infrastructure to run it on, simply because Travis doesn't support nested virtualization. We're only given one virtual machine, and we can't spawn further virtual machines within that virtual machine.
H
It's possible we could, but we'd have to get some hooks in to support it, and we'd maybe have to have some machines that are already up and running, or we'd have to spin them up somewhere. Wherever we spin them up, though, that could also add cost, right? Like, I guess you're imagining spinning something up on a public cloud provider's infrastructure.
C
Yeah, I mean I don't want to create busy work. Ultimately, if the other solutions pan out at some point, then this is wasted effort, so I wouldn't want to make it a huge amount of effort, but I also wouldn't want us to not have multi-cluster tests until some indeterminate period in the future. We've had bad experiences in the past waiting for other projects to get to some level of maturity, which in some cases never happens.
H
So I was very eager to try and get multiple clusters, simply for that same reason, right: it's not a super-valuable test until we have multiple clusters in place. The quick solution was just to enable a single cluster for now, because that's better than no clusters at all and running against just spawned processes simulating clusters. So it would potentially involve more investigation and thinking about what we want to do.
H
If we wanted a solution now, it might be that we end up using something like Travis and spawning virtual machines in some other cloud provider. It would probably be more affordable than spawning actual clusters which have multiple virtual machines backing them. So that could be an alternative we could look at.
C
Then, depending on how big these things are, I mean they could be tiny virtual machines which are almost free; in fact there are many very small instance sizes, just as an option. And just to clarify: we do actually have continuous-integration multi-cluster tests, they're just not running on real clusters yet. Is that the situation, right?
F
Yeah, so if we write to the two APIs, you can see the controllers reconcile between them; that's effectively multiple clusters. Running a real cluster, the main reason that's valuable is that then we have controller interaction that maybe we don't expect. So even a single real cluster, and running tests against it, gives us some assurance that our assumptions in integration land, or the so-called managed fixture, are largely correct. So I agree with you that we need multiple clusters, but we have something close to 90% coverage between fake multiple clusters and one real cluster, yeah.
C
I think that makes a lot of sense. So we do actually have... we are testing that our controllers are able to handle more than one cluster, but they're not clusters with actual controllers in them; and then we are testing that our control plane can handle a cluster with a real controller in it and get the feedback, you know, whether it's status updates or whatever it needs to get from the other controller. It's just combining the two that we don't quite have yet. Okay, that makes a lot of sense and I agree with you.
C
That's quite a long way there. There are a bunch of useful tests that wouldn't work in that environment, but maybe we can live with that for a short while. So just to get back to my original question, Ivan: do you have any estimate as to how far away we are from having multiple real clusters, if we just sit and wait for minikube or whoever it is that has to do stuff to get that to work? Yeah.
H
I can't provide a concrete date or anything. I do know that minikube is not actively working on supporting that, but the test-infra folks, I spoke with people like Ben, are actually actively working on getting a solution for that, and they're interested in that from a performance standpoint as well.
F
A
I might mention, as we were talking I thought of one more mechanism: we have the option of just local-up. There's a script named that, or something like that, in k8s; you clone the k8s GitHub repo, and then you can build it and run local-up, which spawns the binaries, the control plane binaries, locally, not in Docker I don't think, though there may be one that runs them inside Docker containers too. That spawns processes that also effectively provide a real cluster.
H
A
There's no direct mechanism in minikube where we can actually run two different instances to get two different clusters using the same mechanism, but k8s provides a script which can do a similar thing. I'm not sure, there might be conflicts if they are using the same ports or something like that, but yeah.
F
Ideally, we would be using clusters that were deployed via some standard mechanism like minikube or kubeadm, just so we have some assurance that we're keeping up with what people are actually using. So I'm not saying it's not workable; I think that sort of thing is owned by the testing working groups or whatever it is, and the challenge is, I mean, there's still an ongoing effort to try to create infrastructure to create clusters in some sort of easy way, and I don't think it's really been completed yet, yeah.
A
It has been in process for four years now. I did not mean to suggest that we maintain something, that we make some changes and maintain it; I was just curious whether somebody has tried it. Maybe it just works without any conflicts, which might not necessarily be probable, but maybe I'll give it a try.
F
F
It doesn't properly work in a minikube VM because of some networking thing, but the way that we're running things currently, in a Travis VM, is not a minikube VM. So it may be that we should just try that and see if we can get it working. I'm not suggesting this is the long-term ultimate solution, because maintenance-wise it's not exactly my highest priority, but it could be a near-term option until the other, better-supported options come along.
C
Yeah, I agree with you: we've been down this road before with kubefed and other things, building clusters and putting them all together, and we don't want to get into that business, I don't think. Just a comment: I think all of these solutions we've been talking about recently are all not real clusters; they're all minikubes or some simulated cluster, so I think there's still value in actually running tests against real clusters.
F
Well, in terms of the actual setup, a kubeadm-deployed cluster is effectively a real cluster; the only difference is that the workloads are running in Docker-in-Docker instead of Docker, but everything else is basically exactly what you're going to find in an arbitrary cluster. There's no cloud provider, but we would be fine. Okay.
G
F
C
Totally agree, yeah, but I can think of like a dozen reasons. You know, if you run a Job in a cluster, the Job has actually got to run the containers, and they have to run for the Job to complete, for you to be able to detect that it's completed. If you don't have any containers, then you can play with the API all you like, and you won't actually see anything, right?
F
A Job, yeah. I mean, that's the whole point of minikube: it is effectively a single-node cluster, but it can run workloads scheduled onto it. Scheduling across nodes it doesn't really support, but at present we don't really have any visibility into that in Federation anyway, and that's not what I'm trying to get at; yeah, we need to run workloads, okay.
C
Which I think is fine, but I think we need to improve on that at some point, and I would propose that we don't even try to go down the route of constructing Kubernetes clusters for ourselves, with or without kubeadm or anything else, simply because in the past we've had so much trouble getting that stuff right. You know, it keeps changing, and that may be...
C
...to run tests. I'm not sure what physical infrastructure it runs on; I believe it runs across all the public clouds, and there's money behind that and donated resources, because all the public cloud providers are sponsors and members of the CNCF. So that's actually a good avenue. I know those guys quite well, and we could find out from them if they would be prepared to run them.
D
Guys, I have a similar but slightly different question. We have discussed testing with minikube versus real clusters, but a federation deployment is a distributed deployment, and the clusters can be deployed in different places; the latency between them can vary quite a bit, so we have to take those cases into account as well.
C
Yeah, I mean there are different degrees of testing. At the very basic level there are obviously unit tests and basic integration tests, and then running against any simple clusters. You could dream up a potentially infinite number of scenarios that one could test; I think the question is figuring out how to do that and which ones are the most important. The same goes for the Kubernetes end-to-end tests: they could also simulate...
D
C
The code for that is actually in the end-to-end tests. I'm not saying that's the best way of doing it, but it's very easy to do, and it's been done before, so we could take the same approach. We don't have to go and boil the ocean and come up with fancy ways to build clusters with low-latency or high-latency networks, etc.; we just put them in end-to-end tests.
G
C
I would encourage you to do that; you can actually go and have a look. The tests I'm referring to are called, well, I can dig them out if needed, it was years ago that I wrote them, but they go under a category of fault tolerance or failure tests, failure case tests, something like that. There are a whole bunch of them, and they do things like reboot nodes and disconnect networks and slow down networks and all that kind of stuff, yeah.
A
I'd encourage you guys also to have a look at that, but I still have a feeling that there is a good deal of detail which probably needs to be added in the document. Did you happen to talk to Paul and Maru, as we discussed in the previous meeting, about the alternatives or about the possibilities of implementation for this, or not?
C
I haven't read the document yet, and I will try and do that today, but just a brief comment: I would strongly encourage you to consider the model where, in essence, the existing Federation push model is somebody writing stuff to an API server, the Federation API server, and then some controllers reading from that API and writing to a cluster API server. Those controllers are not fundamentally tied to running in the Federation control plane; they could run in the destination clusters.
C
So, you know, I'm sort of waving my hands a lot, and I'm sure there are some details that would need to be figured out, specifically around auth etcetera, but the basic model is just running them there. I think we called them propagators in the design document; I'm not sure what they're called in the code right now. Just running one of those in each cluster, which reads from the Federation API server and writes to its local cluster.
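To make the pull model being suggested here concrete, the following is a rough Go sketch of a propagator agent that runs inside a member cluster, reads desired state from the federation API server, and applies it locally. The kubeconfig paths, the FederatedDeployment group/version/resource and the template layout are assumptions for illustration only, not the actual design under discussion.

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// One kubeconfig for the federation control plane, one for the local
	// member cluster this propagator runs in (paths are hypothetical).
	fedCfg, err := clientcmd.BuildConfigFromFlags("", "/etc/propagator/federation.kubeconfig")
	if err != nil {
		panic(err)
	}
	localCfg, err := clientcmd.BuildConfigFromFlags("", "/etc/propagator/local.kubeconfig")
	if err != nil {
		panic(err)
	}
	fed := dynamic.NewForConfigOrDie(fedCfg)
	local := dynamic.NewForConfigOrDie(localCfg)

	// Assumed resource names, purely for illustration.
	fedGVR := schema.GroupVersionResource{Group: "types.federation.k8s.io", Version: "v1alpha1", Resource: "federateddeployments"}
	localGVR := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

	for {
		// Read desired state from the federation API server...
		list, err := fed.Resource(fedGVR).Namespace("demo").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			for _, item := range list.Items {
				// ...extract the template and write it into the local cluster.
				template, found, _ := unstructured.NestedMap(item.Object, "spec", "template")
				if !found {
					continue
				}
				desired := &unstructured.Unstructured{Object: map[string]interface{}{
					"apiVersion": "apps/v1",
					"kind":       "Deployment",
					"metadata":   map[string]interface{}{"name": item.GetName(), "namespace": "demo"},
					"spec":       template,
				}}
				local.Resource(localGVR).Namespace("demo").Create(context.TODO(), desired, metav1.CreateOptions{})
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```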
A
Yeah, sorry, there were two documents; I had the pull one, yeah. Gwon, I guess, started it, I mean he took it up a couple of weeks ago; he took up the activity of proposing a design for read access. We already had a design to some level, but some amount of complexity in transitioning from v1 to v2 means it might not be applicable directly, so Gwon had a link to that, and what he took up was that he, along with Alex, would try to propose a design with respect to that.
A
We do have a commercially similar requirement, and one thing we have already made clear is that it's not necessary that read access, or the federated reads feature, needs to be implemented as part of the Federation v2 API. You know, it can be put under the multi-cluster umbrella and can be implemented as a separate thing or anything, so all the alternatives should be considered.
C
B
A
Yeah, Gwon, yes. As I did suggest, before putting the idea in the document, it would be really appreciated if you have a chat with either of us to formalize the idea, because in both the documents I saw, I found some ideas to be a little premature. There has already been some background and some thought we put in earlier, and we have had many discussions around similar topics, so it probably definitely makes sense for you to have that information before actually jumping into detail in the documentation, yeah.
C
Yeah, I mean one mode that's actually fairly effective is to put some slides together of the essence of your proposal and present them either at this meeting or a separate one. I think we have quite a bit of spare time at these meetings; if we wanted, we could allocate half an hour at each of these meetings to a design review, and make sure that the person responsible for the design, ahead of time,
C
...knows they're going to be presenting a slide deck of a couple of slides, and then we can actually have concrete decisions. Because I think assuming that everyone has read the documents in full, and assuming that everyone knows what's in the document, is something that doesn't work very successfully, in my experience, yeah.
C
Yeah, I think ultimately you do need to write the document anyway, because not everyone can attend these meetings, so there has to be a record of what your actual plan is. But rather than assuming that everybody has read it and has commented and responded to your comments and blah blah blah, it's much higher bandwidth if you can just do all of that in the meeting with slides. Just a suggestion.