Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review Meeting - 01 July 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: I think that you could... oh, thank you, Shane. We could probably extract any region information necessary based on the topology, I'm guessing, but I'm not enough of an expert on S3 to know all of the situations where the region really matters.
B: Okay, the region really matters because everything is over the network. You can still access buckets in another region, but one of the main reasons to talk to a specific region is: one, if you're geographically distributed and you want to talk to the closest region; or two, for cost, where sending large amounts of network traffic across regions is expensive. I think it's something like 10 cents a GB.
D: Also, if the admin cares about it, they can set it, and the driver can push that to the downward API so that the SDKs will use the correct one. But the question was mostly about the bucket request, right?
D: It's more than that; it's also the networking. This is how you choose the actual endpoint, for example on AWS, right? This...
D: Like a class, in that sense. Even the class could represent local region versus remote, or whatever the administrator has in mind for topology tagging, right?
D: Right, so that's the analogy for the COSI driver sending the region to the workload. Basically, hooking up things correctly, like in the PVC world, where you place the PV based on topology and then you also schedule the pod on the same node, and things like that. So...
D: Right, it doesn't require it. And that's the question: do we see a user... or maybe the question is how burning it is for us to provide this capability to users at this point. We can always add it later, but how much do we really feel the API is missing without it?
A: The driver does know where it is, and the driver does get to specify the endpoint that the workload should talk to. So if there are regional considerations, they should already be built into the endpoint that you send. And then if, for whatever reason, you also want to include a region for the purposes of the S3 signature header, you could have that string get pushed down through the downward API. But all of that flows from the driver back out, rather than on the way in, which is what the bucket request is.
A: Yeah, the concept of regions and zones has to make sense in the cluster itself, and they'll have different definitions. If you're running a GKE cluster, they're going to be Google regions and Google zones; if you're running on Amazon, I presume they're Amazon regions and Amazon zones. And then...
A: No, that's not the case; it's actually at the controller level where all this happens. So when you do CreateVolume, there's an optional capability... I mean, you know, drivers don't...
A: Then, at CreateVolume time, you can pass in topology constraints, which can include hard constraints and soft constraints, and...
B: No, it's the other way. When node server initialization happens, it returns the topology, and Kubernetes, knowing the topology up front, gives CreateVolume a preferred and a required list, and you have to choose one from...
A: ...have gone. So to the extent that they have matching of topology between workload and storage, typically you set your PVC to bind on first use, and then you create a pod. You allow the pod to get scheduled somewhere, and then Kubernetes arranges for the volume to be created with a topology that matches where the pod already got placed, if possible. That allows you to arrange for storage and compute to be close together.
B: And I think, even though that's not true... no, hold on, let me look. Okay, let me just share my screen. What I'm looking at is NodeGetInfo: in the response, you get the accessible topology that is on that node. What's the topology there?
B: And you can define it however you want; you can use whatever keys you want. This is not Kubernetes-centric. This is the CSI driver saying "this is my topology," because the driver is the only one that knows what the topology is. The driver might talk to Kubernetes, or might talk to an external system, to find out what it should set as the value, but the driver is the one that advertises where it's running.
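To make that exchange concrete, here is a minimal sketch of a NodeGetInfo-style response in Python. The real CSI messages are protobufs over gRPC, and the topology key `example.com/zone`, the function names, and the metadata lookup are all invented for illustration; only the shape (driver advertises driver-defined topology segments for its node) comes from the discussion above.

```python
# Sketch of a CSI-style NodeGetInfo response: the driver alone decides
# which topology keys and values to advertise for the node it runs on.
def node_get_info(node_id, metadata_lookup):
    # The driver may consult an external service (e.g. a cloud metadata
    # endpoint) or the orchestrator to discover where it is running.
    zone = metadata_lookup(node_id)
    return {
        "node_id": node_id,
        "accessible_topology": {
            # Keys are driver-defined; this name is only illustrative.
            "example.com/zone": zone,
        },
    }

# A stand-in for the external lookup the speakers mention (AWS metadata, etc.).
fake_metadata = {"node-1": "us-east-1a", "node-2": "us-east-1b"}.get

resp = node_get_info("node-1", fake_metadata)
```

The point of the sketch is that nothing in the response is Kubernetes-specific: the keys and values are whatever the driver chooses to advertise.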
A: Hold on, my IDE is slow. I'm just trying to look at this code in one of my drivers and see what happens. Oh, come on... yeah, it's just not fast. It's updating.
B: Because if we don't have a way to say these are the nodes which belong to, say, region one, and, you know, because we're going to region one, the bucket should be in region one.
A: Yeah, so I'm trying to figure out how this works. Okay, do you want to share your screen?
B: The backend it talks to... let's say on AWS, it talks to the AWS metadata service and finds out what the current availability zone is, for instance, because you can't move EBS volumes across availability zones.
A: But if this is node-specific, then how does the plug-in know which node it's running on right now? Where does it get...
A: The Kubernetes pod that comprises the node plug-in tells the node plug-in which Kubernetes topology it's in, and then you just return that information right back from NodeGetInfo. So it's kind of weird; it's this strange loopback where Kubernetes tells you and then you tell Kubernetes, right?
A: I have to imagine that what it is, is there's some notion that some plug-ins know they can attach to all topologies, or only to a limited number of topologies, and that's where the difference has to come in, because some node plug-in is just going to say: I don't care where the storage is.
A: I can just mount it over the network. Whereas other ones are going to say: hey, I'm a local disk plug-in; if the disk isn't on this node, I can't attach to it, because I only talk SATA or SCSI or whatever. I don't talk over the network, so for those types of plug-ins, attaching...
D: Right, I think this is the reason why you need to tell Kubernetes, even though you get the information from Kubernetes: every driver would parse it differently, right, based on its... maybe, yeah, right.
A: I'm trying to figure out how this works in practice, because for the types of CSI plugins I'm familiar with, like iSCSI and NFS, it doesn't matter where it is; we're going to talk to it over the network. You may want it to be closer for performance or cost reasons, but if it's really far away, it can still work.
A: It's just going to be slower and more expensive. Whereas with other drivers, I'm sure it's make or break: if you're in the wrong topology, you can't use this volume, period. But when it comes to object storage, I think it's all the former; there's never going to be a situation where it's make or break, it's just going to be considerations about cost and performance.
D: I wouldn't be completely sure about that, just because networking doesn't have to be fully connected, right? It's not a given that you can always get to any service.
D: No, I'm not going that far. I'm just thinking about the external service, for that matter: you might want to have just several nodes configured with, you know, the special networking needed for that, for example.
B: Right, only within the rack, for instance. So we might have something similar with this too. A good example is that cost could be prohibitive because of the scale of data going across the region, so you have a strong, hard constraint that you can only schedule pods in the same region for a particular bucket.
C: ...information that's relevant to COSI, or we could say we use node labels.
A: We don't have a node component for COSI, so whatever we're doing is going to have to be either part of kubelet or something you do at node setup time.
B: Right, so you could say for a particular driver... or you could come up with well-defined labels. If they're set on the node, then we would be able to restrict topology based on those labels. But isn't...
B: ...matter for our discussion, yeah. So we could just say we won't advertise it, and we just go off of well-defined labels: if a label starts with objectstorage or cosi or cosi.io or something, slash, you know, key equals value, then we use that as a topology constraint.
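As a sketch of that idea: the prefix `cosi.example.io/` below is purely hypothetical (the speakers float several candidate spellings and nothing is agreed), but the mechanism, filtering node labels by a well-known prefix into topology key/value segments, is what is being proposed.

```python
# Hypothetical convention: any node label under a well-known prefix is
# treated as a COSI topology segment. The prefix is invented here for
# illustration; it is not part of any agreed API.
TOPOLOGY_PREFIX = "cosi.example.io/"

def topology_from_node_labels(labels):
    """Extract key=value topology segments from well-defined node labels."""
    return {
        key[len(TOPOLOGY_PREFIX):]: value
        for key, value in labels.items()
        if key.startswith(TOPOLOGY_PREFIX)
    }

node_labels = {
    "kubernetes.io/hostname": "node-1",          # ignored: wrong prefix
    "cosi.example.io/region": "region-one",      # becomes a constraint
    "cosi.example.io/rack": "rack-7",            # becomes a constraint
}
constraints = topology_from_node_labels(node_labels)
```

This keeps the convention declarative: the admin (or node setup tooling) labels the nodes, and no COSI node component is required.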
B: How to put the pod close to that? We'll need a component... I mean, we'll need a way for scheduling to take input from here.
D: But the driver... I mean, what are we trying to solve here? The driver is the one doing the allocation, and the driver can check the cluster location, if that's the desire, or any driver configuration that sets my preferred region or topology, whatever driver-specific way of doing that, and then the driver can allocate the buckets however it sees fit. But what are we really trying to answer here about the workload? The...
D: ...has to be certain values. So in COSI also, the pod will not start, will not schedule, until the node adapter mounts the volume, basically.
B: Stuck there, yeah. No, but in the case of COSI, what we're saying is: let's say there's a Kubernetes cluster that spans two regions, and when you create a bucket the driver says "I've created it in region one," and so you want a pod that's going to use that bucket to also be in region one. Now we need a common language for what's returned by the driver, so the driver needs to tell us in some way that this is re...
B: Or it can be values that the drivers are already assumed to understand. So the driver could say: I understand labels starting with this prefix.
B: Yeah, and then... while scheduling, the driver would have to respond saying that this bucket belongs in this region, and then it gets scheduled to that node.
D: So what we're trying to do now is to have COSI affect pod scheduling in order to optimize the pod location for the bucket, right? Exactly.
D: It's a must, right, I agree. But do you guys feel like it's a must for the API at this stage? My sense for what we're doing right now is that we can stop around this point, where we say COSI doesn't define this. Well... I will define it. I...
D: And I'll tell you why it sounds reasonable: the bucket location itself actually doesn't matter. The only thing that matters to the driver is to correctly choose the topology for the pods that access it. It doesn't really care whether the region for the bucket is called us-east-17 or us-east-16, right? It just has to...
D: Yeah, but then it becomes the driver's role to implement that request, that optimization or whatever. From the Kubernetes perspective, Kubernetes only knows topology, which is Kubernetes cluster topology, right? Nodes.
D: Yeah, I think so. Just to wrap up on that, my thought process is... I agree with you that Kubernetes needs to understand the topology of the access. It needs to know that these nodes can access a bucket in this region. Though actually it's not Kubernetes that knows it; it's only the driver that has this notion, but...
A: Kubernetes itself doesn't know that. The topology constraints that exist on the CSI side have two levels: hard constraints and soft constraints. A hard constraint means that if it's not here, it will literally not be usable; a soft constraint just means that while it's usable everywhere, we prefer it to be here because it's better for some reason, whether cost or performance. It's your ability to express a preference.
A: So those are the two levels: hard constraints, meaning if you put it here, it literally won't be accessible; and soft constraints, meaning if you put it here, it'll be sub-optimal. And you can pass both of those in with CSI today: hard topology constraints and soft topology constraints.
D: It sounds good to me to start with this notion, because I think the actual region of the bucket doesn't really matter, to be honest. The fact that AWS, for example, has a fixed number of regions for buckets doesn't really affect how the Kubernetes cluster looks at its own resources, and the driver can map these bucket regions however it likes to the cluster regions, the cluster topology.
D: However it wants to; it's the driver's decision, basically. It's not that Kubernetes asserts that it has to match by name, that the bucket and the pod have to have the same topology label on them, or something like that.
D: So basically, what we're saying here is that the driver just needs to allocate the bucket based on the topology information that is in the cluster, these nodes, these labels, etc., and then map that to the bucket world, where it might be completely different topology names, regions, availability zones or whatever, and then apply that when it allocates the bucket, and that's it. And then just return back: okay, this maps to the cluster this way.
D: Well, there could be mapping, because an on-prem cluster wanting to access a remote bucket might still have some considerations between nodes that have access to it and nodes that don't, etc. So what I'm saying is that the cluster topology doesn't have to be in the same language as the outside world's topology; by the outside world I mean not COSI, but the provider's buckets.
A: Maybe we just need a PoC to show how something like that could work, because you're positing an on-prem cluster with two different zones, one of which prefers to use one storage zone and one of which prefers to use a different storage zone, and then some mapping needs to occur.
A: Oh, well, okay, yeah. So as long as the driver is speaking to Kubernetes in the Kubernetes zone language, with all the storage-specific stuff hidden from Kubernetes, yes, that would work fine, because then everyone's speaking the same language and the driver is just performing a mapping internally that nobody can see.
D: ...one knowing the topology of the outer world, if it's really outer, in the sense that there are object stores outside of the cluster. I might have two of them close by and three of them far away, or whatever preferences, etc. But the driver is the one encapsulating all this logic of how this maps into the cluster topology, and it has to hide that away from the workloads and certainly from Kubernetes.
D: So with that, I feel that what you're suggesting, using the topology from CSI, makes sense, because that's the cluster topology; that's how...

A: ...it understands it. And so yeah, if there has to be mapping inside the driver to make sense of it, there's no problem with that, as long as everyone's speaking the same language.
A: Right, yeah. Well, there was always a concern about this hypothetical feature where there's, say, an S3 v5 that some workloads understand and some don't, because I think we agree that while S3 v2 still exists, nobody actually requires it or uses it; it's a historical curiosity at this point. Is that accurate?
D: I do see it in the wild, I'd say, in some compatible products, servers which are still v2-compatible and don't support v4, but it's beginning to phase out for sure. So unless anybody in this working group has any restriction, you know, supporting it in some specific fashion, we might just say, okay, COSI doesn't support it; you can support it otherwise, with a class or whatever you want, but...
A: ...to allow workloads to express a preference, in a world where I have two different providers, one can only provide S3 v5 and one can only provide S3 v4, and the workload can only talk S3 v4 and can't talk S3 v5, and it wants to ensure that it gets what it needs.
B: Okay, no, no, that's three types. Let's look at the S3 types. I literally just had these two fields. Can you zoom in a little bit? Oh, sorry.
A: And if we can make that just how it works, then it seems simpler. Where all the subfields matter is when you're doing access granting or access revoking, or in the downward API, when there's a bunch of S3-specific stuff that needs to be communicated, or a bunch of Azure Blob-specific stuff that needs to be communicated.
A: There's just a string, and then all of these fields only matter when you're doing access granting and access revoking; that's when the subfields of the protocol become relevant. If you're just creating and deleting buckets, you never need to look at any of these, I don't think. No, you're right.
D: So what you're saying is... sorry, so you're saying that the bucket class determines the allocation parameters, and that's it; it's not that the request has more effect on the allocation, in that sense.
D: No, but we... oh, okay. But you said that the bucket access request does have to provide some, you know, expected grants, like base expectations.
A: ...to point to a BAC, so that you could distinguish between, say, a request for read-write access versus read-only access, or those kinds of considerations. But again, I don't know if any of those fields that we were just looking at with GCS on the shared screen would be relevant at that point, because those are things that are supplied by the driver to the workload, right?
B: Because someone... you know, with the S3 protocol, someone could do service-account-style authentication if they're inside the AWS environment, but probably not if they're inside a data center running Ceph or MinIO.
A: Right, but it's too late at that point. If you've created a BAR and it's not bound to anything, then there is no service account to set up, and there...
A: ...set up; there's my service account that can be referenced at the time when you're trying to grant access. Because when the BAR gets created, some sidecar is going to see that, it's going to see a BA that needs to be completed, and it's going to go talk to the driver and say "grant access on this bucket to this thing," and then after that is completed, Kubernetes has to have all the information it needs to actually start a pod and connect to that bucket.
A: So I do see an issue: if you really wanted the access to be bound to a service account, you kind of need to know what it is before the pod is created. But then in...
A: How do you document that? Like: "you need to use this field if your particular bucket protocol is one that uses service accounts for authentication, but if you're using one that doesn't use service accounts for authentication, you can leave this blank." That's horrible documentation! Again, it's the driver, but yeah.
B: All right, so, okay: getting rid of service account, and then getting rid of protocol-specific structures and just having an enum. All of this so far... there's something else that was on my mind. Okay, so there is one other thing: while requesting a bucket, you can specify bucket-specific parameters.
B: Okay, this is get insert. Would that be CreateBucket?
D: We have like four minutes, and I just want to ask you: do you think we... I think we are trying to close on the API as much as possible, right? And every time we jump to the spec, which is the main piece of the APIs, we keep on adding more questions and making more changes. And I would like, from the point of view of this working group...
D: Maybe what we should do is go over these spec structs one by one; there aren't too many, right, there's like four, and just see that we are content with them.
D: Yeah, I guess so. I mean, there are always deep-dive discussions on every field that we get into, but we really want to get the API review done, right? That's the main goal for this project to get going, and we seem to be going in circles around these specs, making these changes and then thinking about them again, which is perfectly normal, of course, but I think maybe we can...
B: Well, you know, at least I have a view of the big picture, and one good way to get the big picture right is actually to just keep solving problems one by one, because eventually we'll get to the right point. That's the approach I'm following here, but I think you make a good point.
B: Let us do an internal review and look at the whole thing as one, because not everyone solves problems the same way, and maybe there's a perspective I'm missing, and you can help make it better. So I'll do that: diagrams and a full picture of what the APIs look like, and we can go from there. Would that help?
B: Oh no, not at all; I don't take it that way either. So we're definitely following up, and we're taking out fields that we don't need. I don't think there's been a time where we've talked about something and then come back and solved the same problem a second time, because we follow up right away. We have some people actively writing code.
B: It's just that things are still in flux, because there are still some unanswered questions; service account is a good one, for instance. And I think last week or two weeks ago this question about protocol came up, and we wouldn't be discussing it if it wasn't a sensible question. So the way I...
D: ...in between, how to scope this first release. I think at one point we're saying this is a must, and the next time we're saying, well, we're not sure, so we can drop it, and then we add it back again in some other form, as a class or, you know... I think we are doing, what do you call it, a converging series of changes. I'm not saying it diverges, but...
B: Like service account, for instance. Yes, it's also a function of the people, the actors in the group, as in some concerns are more important for some people than for others.
B: And everyone's concerns are important to me. Now, service account was something that Andrew from GCS brought up, and it's a very important concern, but we don't have a good answer yet, and that's why I'm okay with saying we'll do it later. This was an unanswered question all along; it was never something we figured out in its entirety. So I would say, yeah, looking at the full picture...
B: Maybe I should do a better job of communicating it, but we're definitely converging, and I don't think we're bringing things into scope and out of scope just willy-nilly.
D: So do you think these discussions help with passing the API review, then? So the API...
B: The review is a step that follows having a good design, so getting the design right is far more important to me. I know we're all focusing on getting the API review correct, but if the design isn't right, it doesn't matter whether the API review deadline is reached or not.
D: So the one tool that we use a little sparsely is scoping. We sometimes use scoping more aggressively and sometimes we don't; we keep on discussing things which we are not sure we are going to use, specifically within the group that discusses it. Like, in the case of service accounts, we're saying...
B: Okay, yeah, they need to be here. So, okay, I mean, this is where we discuss, and they're aware; Andrew is aware. I could personally go and talk to Andrew and ask him to join us again, at least for the next few meetings. But that's the reason.
D: Right, the thing is, you know how these things work: whenever you get to the finish line of the review...
D: ...somebody who wasn't involved in all the stages now doesn't get the level of control that was desired for their needs, but can still affect it. So I guess... I don't have a solution; I'm just saying that...
B: It happened quite a bit the first time; the second time we got through it, and this time I think it'll be even easier. So yeah, when people came in at the last minute just to review the KEP rather than participating in the discussions, they did bring up concerns, but we addressed them the first time; the second time it was a smaller set of concerns, and this time, if something comes up, we'll address it again. I don't...
B: I mean, if you're not participating actively, we can only do so much to satisfy your needs. Now, I think we should go forward with what we have, and if concerns come up, we deal with them one by one. I don't see another way, because if you're not participating in the group, it's hard to know what you need.
D: Yeah, so I'm very happy with the discussions. I participate every time because I feel very engaged, but I'm also a little concerned that it's still in discussion and we can't get, like, the first API approved or something. That's something I personally feel... well, maybe I'm a little impatient in that sense; maybe it takes more time, but anyway...
B: It's a good thing. I would say: let's push it again together. API review, the way I look at it, is something that should not affect the actual discussions that we have in terms of scope, because what we need for alpha is pretty much where we are already; any discussions going further are post-alpha.
E: I just think we'll still have a timeline, right? So I think it's really time to submit the KEP and get Tim to start reviewing again.
B: Right, right. I think what Guy was bringing up is that we don't have a clear picture of the scope, and also of how we're deciding to have something in versus deciding to push something for later. That's a very valid concern. Yep.
D: And I'm pretty open to leaving things for later, assuming we just know how to go about them later and we feel confident enough that the base design is good for the basic feature. But yeah, I completely trust the process that you're taking this through, and I appreciate it. Of course, I would be happy to be involved in any review process or anything like that.
B: Yeah, thanks. Thank you. Right now, the way I look at it, the KEP itself is on me, and depending on feedback we'll have things to discuss. But as of right now we should just keep moving forward and having these discussions, because the momentum of the group also matters, and any problem that we see as a potential problem, we should discuss. There's one thing that I would appreciate help on, which is development.
B: We need more people testing out or writing prototypes of all these designs that we come up with, because while writing code there'll be some things discovered that might have to be addressed.
A: So, I know we're way past our time, and I wanted to agree with what you said about Sid and Guy. Just really briefly, on the point that you started, Sid: all of that stuff that you can specify at create time, I say those are BC opaque fields and leave it at that, and yeah, we don't need creation parameters in our...