From YouTube: Kubernetes Federation WG sync 20180326
Description
See this page for more information: https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md
B
So yeah, last meeting we were talking about some open problems, some of which I thought of as newly raised and some carried over from earlier discussions. We actually did talk through them; if you look at the meeting notes, we had five or six items. There was one item about version skew, and there was one item about which version the federated API should be using to talk to the target clusters.
B
There was an item about authentication and authorization being external, and then two items: one was about defaulting and one was about validations. The last one on the list was provision of upgrades, whether we need a strategy for the same. We did discuss up to validations, which we were still talking about when we ran out of time, so we can continue forward from that.
B
Yeah, so there was no clear solution for validations as of now, and the solutions proposed were mainly about letting the users of this API know that they might face failures: differences might appear when a template is accepted by the federated control plane and the same thing is then submitted to a cluster. I guess this might be okay to move forward with as of now. We can talk about the upgrades item.
A
I had a quick question about validations. Sorry, I think I missed the last meeting. Could you just remind me, and maybe others, what exactly the problem with validation is? It's not clear to me why, yeah, so, given a spec, sorry, a template spec for whatever it happens to be, why was it a problem?
B
It's about whether the same behavior as in kubernetes is respected, right: the same defaulting and validations do not apply to the template spec when we use it in the v2 API. There are a couple of ways suggested for defaulting; Maru found some mechanism which actually works around the problem as of now. What he is doing is he basically...
B
...compares the version of the template resource with respect to the version that is created in the joining cluster, on a federated cluster. So that's not necessarily a problem, but for validations, the same validation which exists for the kubernetes API resource does not apply, or is not done, on the template spec which is submitted to the federated control plane.
B
The last time we talked about it, the solutions were like: we just need to ensure that users know that this is a possibility, and that the validations that happen, or that do not happen, in the federation control plane can actually fail when the same spec, or the same resource, is created in a federated cluster. Okay.
A
So it's not clear to me why we cannot encode, in the Federation control plane, at API instance creation time, why we cannot validate some of those with essentially the same code. I mean, even if it involves taking the template that is submitted by the user to the Federation API, using that to construct a sort of dummy kubernetes API object, and passing it through the normal kubernetes validator to verify that that template is valid. Yeah, it doesn't seem like it's conceptually impossible.
B
Conceptually it is possible, but the problem is that if you want to use the same validation code from the k8s API, that particular code is not factored out into a separate library. It is inside core kubernetes, and that basically means, just for that piece of code, either you vendor in k8s or you duplicate the whole portion of kubernetes that defines the validation, but...
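To make A's suggestion concrete, here is a minimal illustrative sketch in Python, not actual kubernetes code: as B notes, the real validators are unexported Go functions inside the kubernetes tree, so `validate_deployment` below is only a hypothetical stand-in for the cluster-side validation logic.

```python
# Illustrative sketch of A's suggestion: build a dummy cluster-style
# object from the submitted federated template and run the same checks
# a member cluster would run at admission time, so errors surface
# synchronously. All validation rules here are invented stand-ins.

def validate_deployment(obj):
    """Hypothetical stand-in for the cluster-side Deployment validator."""
    errors = []
    spec = obj.get("spec", {})
    if spec.get("replicas", 1) < 0:
        errors.append("spec.replicas must be non-negative")
    if not spec.get("template", {}).get("spec", {}).get("containers"):
        errors.append("spec.template.spec.containers must not be empty")
    return errors

def validate_federated_template(fed_obj):
    """Wrap the federated template in a dummy Deployment and reuse the
    cluster-side validator at federation API admission time."""
    dummy = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": fed_obj.get("metadata", {}),
        "spec": fed_obj["spec"]["template"],
    }
    return validate_deployment(dummy)

fed = {
    "metadata": {"name": "demo"},
    "spec": {"template": {"replicas": -1,
                          "template": {"spec": {"containers": []}}}},
}
print(validate_federated_template(fed))
# ['spec.replicas must be non-negative',
#  'spec.template.spec.containers must not be empty']
```

As B points out next, the practical obstacle is not the wrapping but that the real validation code is not importable as a standalone library.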
C
I'm gonna be the dissenting voice here. I'm not sure that's true. I'm not saying it's impossible, but this is an internal detail of the API. It is not published at present, and that's because they want to be able to vary it without necessarily exposing the internals. So the idea that they're going to publish it just for us, just so we can consume it, like, that'll be a negotiation. I don't think that's a slam-dunk. And the other concern is, well...
C
Yes, I can create the template, but what about overrides? Is the implication that I actually have to create every possible form of the resource, including its overrides? And I also have to take into account maybe the target version, because that can skew forwards, and remember, member clusters may not have the same validation code; there could be issues with that. Well, I guess the concern wasn't so much that we ignore this indefinitely; it's that, short term...
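A small sketch of C's override concern, under the assumption that overrides are per-cluster patches applied on top of the template (the dotted-path encoding below is invented for illustration): each cluster receives a different rendered object, and it is each rendered variant, not just the template, that would need validating.

```python
import copy

def render_variants(template, overrides_by_cluster):
    """Apply each cluster's overrides to the template, yielding the
    concrete object that each cluster would actually receive."""
    variants = {}
    for cluster, overrides in overrides_by_cluster.items():
        obj = copy.deepcopy(template)
        for path, value in overrides.items():
            node = obj
            *parents, leaf = path.split(".")
            for key in parents:
                node = node.setdefault(key, {})
            node[leaf] = value
        variants[cluster] = obj
    return variants

template = {"spec": {"replicas": 3}}
variants = render_variants(
    template,
    {"us-east": {"spec.replicas": 10}, "eu-west": {}},
)
print(variants["us-east"]["spec"]["replicas"])  # 10
print(variants["eu-west"]["spec"]["replicas"])  # 3
```

With N clusters and per-cluster overrides, upfront validation means validating N rendered objects, each potentially against a different cluster version.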
C
...this is what we're gonna do, and maybe actually explore what sort of issues users encounter when we don't do validation upfront. They're gonna have to be able to examine the logs and detect propagation errors with whatever mechanism they end up using. Some of the errors may be extremely problematic; some of them may be trivial and, you know, papered over with documentation. But I think having actual user experience dictate what solution we end up with is preferable to trying to solve a really hard problem in the absence of that experience. Yeah.
A
And yes, there are certain problems which, I think unavoidably, will only be detectable once we try and propagate things to the underlying clusters. But I think it's also worth noting that the burden on the user is kind of fundamentally higher: generating a synchronous API call error is easier for client applications to deal with in general, and also easier for users to deal with in general, than going and inspecting logs and trying to figure out what happened asynchronously...
A
...as a result of resource creation, for example, or an update, in the Federation. Yeah, so I think, where possible, we should try and do as much validation upfront as we can, with the realization that we cannot do 100% of it and will in some cases need to fall back. I mean, I guess this problem is not unique to kubernetes, sorry, not unique to Federation; it applies to kubernetes as well, wherever you have a template, and there are many of them: deployments, replica sets, etc.
B
So, on to upgrades and our strategy for the same: do we want to spend time on that? I thought that I would list it anyway, because we were listing the open problems, and Federation API upgrades would also be something we will need to define a strategy around. But we can delay or defer this discussion for a little bit; I mean, I don't find it very important to be talked about right now, necessarily.
D
I'll let you go through the issue when you have time. So, if I understand correctly what they are proposing, fundamentally they give the user an OpenAPI builder, and then they use OpenAPI as a validator. Now the real problem is: is OpenAPI powerful enough to validate a kubernetes resource?
D
Probably the answer is no, but at least it's something, and it is better than nothing. But this will not solve the problem that was mentioned by Maru, in the sense that, with version skewing, we will need different validation, if I understand correctly. But anyway, this is just to add something to the discussion. When we have time, I would appreciate your feedback on this approach.
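A toy illustration of the schema-based approach D describes. A real implementation would validate against the cluster's published OpenAPI document; this sketch handles only the `type` and `required` keywords, which is precisely why D suspects such validation is weaker than the in-process kubernetes validators.

```python
# Minimal schema checker (invented, not a real OpenAPI library):
# recursively verify required fields and primitive types.

def check(obj, schema, path=""):
    errors = []
    expected = schema.get("type")
    if expected == "object":
        if not isinstance(obj, dict):
            return [f"{path or '.'}: expected object"]
        for key in schema.get("required", []):
            if key not in obj:
                errors.append(f"{path}.{key}: required field missing")
        for key, sub in schema.get("properties", {}).items():
            if key in obj:
                errors.extend(check(obj[key], sub, f"{path}.{key}"))
    elif expected == "integer" and not isinstance(obj, int):
        errors.append(f"{path}: expected integer")
    return errors

schema = {
    "type": "object",
    "required": ["spec"],
    "properties": {"spec": {"type": "object",
                            "properties": {"replicas": {"type": "integer"}}}},
}
print(check({"spec": {"replicas": "three"}}, schema))
# ['.spec.replicas: expected integer']
```

Structural checks like these catch type and presence errors, but not cross-field rules (e.g. selector/label consistency), which only the server-side validators enforce.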
C
It's an active area of research, but I'm hopeful this will bear fruit for CRDs, because, frankly, the future of Federation will eventually be CRDs, in which case we will be referencing kubernetes types. And I'm saying that not to say we want to approach that now; there are a lot of missing pieces. But in an ideal world, simple propagation is a matter of creating a CRD that references the type you want to federate, rather than having to generate code.
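The shape C is describing might look something like the following hypothetical manifest (the API group, kind, and field names here are all invented for illustration): a CRD-backed federated type that embeds the target type's template, plus placement and overrides, with no generated code per type.

```yaml
# Hypothetical shape; group, kind and field names are invented.
apiVersion: types.federation.example/v1alpha1
kind: FederatedDeployment
metadata:
  name: demo
spec:
  template:              # the kubernetes Deployment spec to propagate
    spec:
      replicas: 3
  placement:
    clusters:
      - name: us-east
      - name: eu-west
  overrides:
    - clusterName: us-east
      clusterOverrides:
        - path: /spec/replicas
          value: 10
```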
C
Yeah, you know, obviously extra special behavior might require more, but I mean: just remove the burden of having to run a separate API server and just run controllers against kubernetes. That would be ideal, I think. That seems to be the direction the community around extending kubernetes is going; they're just not there yet. So we have to limp along the old way of doing things.
C
As I said, I think CRDs are kind of the future. I really don't know enough about them, but as we switched to a CRD approach for the cluster registry, and also, as Quinton, you pointed out many times, we are gonna have to support CRDs, so understanding them at a really fundamental level is, I think, key to this effort.
A
Yeah, I was involved in this in the kubernetes world right at the beginning, and so I don't think kubernetes is perfect in this regard, and I'm sure a lot of work has happened since then. But my general suggestion would be that we follow the kubernetes approach as best we can, and if it's inadequate we can certainly do better, but at least do as well as kubernetes. And the general approach there is some degree of backwards compatibility, plus reasonably well-defined definitions of what alpha, beta and GA mean. Those are written down already for kubernetes, and I don't personally have an objection to what's written down at the moment; we can just adopt that unless there are specific problems. I can't see any obvious problems with adopting the kubernetes approach as it currently exists.
A
Yeah, I mean, I can give a very, very brief summary of what that is at the moment, if anybody's interested. So first of all, alpha means the API is not firm and may change or be completely discontinued, and I guess you could call Federation v1 an example of that. Then beta means the API is reasonably firm, no major changes anticipated; the implementation is not necessarily stable yet or production ready, but you can basically start developing against the API and not expect to have to throw all your code away, such that sometime in the future it will become GA and you will be able to run it in production. And GA means the API is firm and will receive some amount of backwards compatibility in the future, and the default implementations, where such things exist, the controllers etc., are considered to be ready for production use. Those are the basic meanings, and I think that makes sense, and I think we can do much the same.
A
We have perhaps an additional layer of complexity which we need to think about, but which I don't think is particularly challenging: we have this concept of which version of kubernetes cluster, or kubernetes resource, do we support, and that's sort of tied in with this template versus which version of the Federation API we call a given thing. So I think we'll have to augment the kubernetes approach with that aspect, but I don't think it needs us to rethink the whole thing. I think we just have to augment it.
A
I think the details are TBD, but basically we have this mapping that says: Federation API version so-and-so of a given resource type, whether it's alpha, beta or GA, would need to speak to some extent about what API versions of underlying kubernetes clusters are supported, what the behavior is for those, and these kinds of things.
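The mapping A describes could be sketched as a simple support matrix; the resource names, versions, and caveats below are invented for illustration.

```python
# Hypothetical support matrix: for each (resource, federation API
# version), record which member-cluster API versions are supported and
# with what caveats.
SUPPORT_MATRIX = {
    ("federateddeployments", "v1beta1"): {
        "apps/v1": "full",
        "apps/v1beta2": "partial: no maxSurge propagation",
    },
}

def cluster_support(resource, fed_version, cluster_group_version):
    """Look up how well a member cluster's API version is supported by a
    given federation API version of a resource."""
    entry = SUPPORT_MATRIX.get((resource, fed_version), {})
    return entry.get(cluster_group_version, "unsupported")

print(cluster_support("federateddeployments", "v1beta1", "apps/v1"))
# full
print(cluster_support("federateddeployments", "v1beta1", "extensions/v1beta1"))
# unsupported
```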
A
And if people are running these things in production, which we hope that they will and intend for them to do in the future, they need to know what the semantics are: what clusters can they run underneath, and are there any caveats? Does it work completely with this version of kubernetes, or is it partially supported? I think that's...
C
That actually will be one, just because of the conversion mechanism in play, and I guess I'm not sure that that's a problem. I mean, I guess the implications are defaults: if I created a v1 resource, maybe the defaults could be different, maybe the validation could be different. But I guess the challenge would be that the controllers could have subtly different behaviors across releases, and that may not be reflected in a version you could even track. Does that problem make sense?
A
Yeah, I mean, I think you can track these things with permutations of versions. I mean, the controllers are not magic. So, given a kubernetes, sorry, given a Federation API version and a controller version and a kubernetes API version, it's reasonably well defined what will happen and what the intended and actual behavior is. So just formalizing that is useful. I guess I'm not...
C
I guess I need more confidence that that's true. I have this suspicion that across kubernetes versions, even though the API may be the same, there could be changes to the underpinnings of that resource and its related controllers that could have resulted in different behavior. But maybe that's just me not being well educated on the issue.
A
You know, I think that is true, and I think that is not intended behavior, in the sense that, forgetting about Federation for the moment, kubernetes has this situation where a bunch of people write software that runs against kubernetes clusters, and they don't upgrade their software in sync with kubernetes or in sync with the kubernetes cluster upgrades.
A
So Federation, in that sense, is actually just another client of a kubernetes cluster, and kubernetes has to provide backwards compatibility with its clients by definition, and has always intended and tried to achieve that. So to whatever extent that is not true, one can view it as a bug in kubernetes, and that's the kind of thing you put in release notes: you say that this given version of kubernetes doesn't actually provide proper backwards compatibility for clients, and clients are gonna break or behave in weird ways.
C
Yeah, that makes sense. I think I was trying to simplify the problem by saying, you know, a Federation release, say 1.9, supports replica sets v1. And I guess the thing that I've been ignoring is the version of the Federation resource itself: once we get out of alpha, and it's something we're willing to support, it would be v1beta1, and then the move to v1 may or may not...
A
I think in theory it doesn't need that, other than to the extent that it supports all versions prior to this. But in practice, as you pointed out, kubernetes does not always achieve its goal, which is to provide proper backwards compatibility. So wherever those situations arise, we will need to at least be explicit about them. So in theory it's simple; in practice it might be slightly more complicated. Okay, okay, so...
C
I'm hoping that we don't ever... well, I'm hoping we do. Like, it's more like we're talking about v1beta1, v1beta2, v1, v1.1, within a given release. What you're saying absolutely makes sense. I just had this shocking realization that there could be a v2 at one point, and then we'd have v1 and v2 and we probably would support them both. It's probably a long shot that that's gonna happen anytime soon, but I just...
B
The most complicated case that we actually talked about is that Federation, if it is used as a layer for users, then they would expect that they should be able to access all the available resources, which in the kubernetes case means all versions of the resource. And okay, that may not be needed, but we are totally not thinking about that right now.
C
At least, I mean, within version 1, so v1beta1 versus v1, it's the same etcd resource regardless; there's just conversion so that you have API compatibility, but the underlying storage is canonical. So I'm just kind of like: oh, if you had v2, would that still hold? I don't know. I don't think anybody is really thinking hard... well, I shouldn't say I don't know; I haven't heard anything about the possibility of having a v2.
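A toy sketch of the conversion mechanism C refers to: several API versions are served over one canonical storage version, with lossless conversion in both directions. The field shapes are invented for illustration.

```python
# Hypothetical versions of one resource, stored in a single canonical
# shape; each served version converts to and from storage.

def v1beta1_to_storage(obj):
    # v1beta1 exposed a bare integer; the storage version wraps it.
    return {"replicas": {"count": obj["replicas"]}}

def storage_to_v1beta1(obj):
    return {"replicas": obj["replicas"]["count"]}

def storage_to_v1(obj):
    # In this toy example v1 happens to share the storage shape.
    return dict(obj)

stored = v1beta1_to_storage({"replicas": 3})   # written by a v1beta1 client
print(storage_to_v1(stored))       # {'replicas': {'count': 3}}
print(storage_to_v1beta1(stored))  # {'replicas': 3}
```

Because every served version round-trips through the canonical storage shape, clients on different versions see the same underlying object.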
A
There's quite a lot of thought that has gone into that. It's actually documented, and the answer, I believe, unless it has changed since I looked at it a year or two ago, is precisely that v2.0 of a given resource explicitly implies API breakage, so no backwards compatibility with version one, which is pretty standard in many products; it's not too surprising. So yes, it might be called a replica set or a deployment in v2, but it is explicitly intended that you cannot rely on compatibility.
B
Okay, the last point was about how the control plane handles upgrades in the case of an active control plane. I guess, for example: you have released, say, beta 1 of Federation, you have an active control plane, and you had created particular resources. You come to beta 2, and there you have changed the version of the template spec which you are embedding. How exactly do you expect users to upgrade from the previous version...
A
...to the new version? Again, I would suggest that the solution is right on the tails of kubernetes, which has exactly the same problems, and solutions exist: there are conversions, and when you do an upgrade, the resources in etcd are upgraded. There's a well-trodden path here in kubernetes; we should just do the same thing.
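The upgrade path A points at could be sketched as follows (a hypothetical stand-in, not the actual kubernetes machinery): on upgrade, each stored object is decoded from the old version and rewritten at the new storage version.

```python
# Invented example: migrate stored objects from v1beta1, where the spec
# was top-level, to v1beta2, where it is nested under a template.
CONVERTERS = {
    ("v1beta1", "v1beta2"): lambda obj: {"template": {"spec": obj["spec"]}},
}

def migrate_store(store, old, new):
    """Rewrite every stored object from the old version to the new
    storage version, as a control-plane upgrade would."""
    convert = CONVERTERS[(old, new)]
    for name, obj in store.items():
        store[name] = convert(obj)
    return store

store = {"demo": {"spec": {"replicas": 3}}}
migrate_store(store, "v1beta1", "v1beta2")
print(store["demo"])  # {'template': {'spec': {'replicas': 3}}}
```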
C
I'm not sure that there's... like, it seems we just kind of have a different philosophy, and I'm not sure they're necessarily compatible. Obviously you can convert one into the other; it's not like they're not interchangeable. But I'm kind of wondering how best to move forward. I mean, it could be...
C
...we just have parallel efforts for the stuff where it's different. I was kind of hoping to have sort of common objects that everyone could emit or consume, and everything else could be different, but there would at least be kind of a lingua franca or whatever between them. But maybe that's just an unrealistic goal and we can just focus more on design coherence in general.
A
Yeah, I'm just gonna suggest that we go ahead semi-independently at this point. I think we've discussed what we can, and we've shared as many ideas as we can and come to as much agreement as we can, and now it's gonna be a case of just building this stuff. So we're gonna just move ahead with that. We will certainly keep this group apprised of what we do, and the difficulties we run into, or successes, etc. But yeah, I think...
C
Yeah, fair enough. I think I'm a little bit sad that there's not an opportunity to share more development effort, as I think, if we're not sort of agreeing on the primitives, then the propagation thing that Nord represents, I think, is useful; it represents a certain amount of engineering investment, and we're probably gonna continue making that investment. And if you're gonna want to use different primitives, you're essentially gonna have to either fork that or create something new, and it's not free. So, yeah.
A
We'll use what we can, and we'll either converge, or we will change plans; it depends how valuable these things are. We can make those decisions along the way. I don't think we need to make them all now. It's not the case that nobody will reuse anybody else's code, or that everything will be forked and lost forever. We can do those on a case-by-case basis along the way.
A
So I haven't actually had a chance to do anything with that, but in summary, I hope to be able to do it maybe this week. In summary, we just need to kind of clean up our act ever so slightly regarding the registration of our SIG and working group. I've now actually discovered that the definition of a working group is something that does not own code, and so we are actually now technically a sub-project.
A
So a sub-project is a thing that owns code, and a working group is a thing that does not own code, apparently. These are just terminology changes. So we just need to put that in a document and say: we are the multi-cluster SIG, here is our manifest, here's how we do things, all the basic kind of operational side of things. Once we have that, we can say...
A
...yes, we now have what I believe will be a sub-project called Federation, and it requires one or more repositories under the kubernetes-sigs organization on GitHub. These will be approved, and we can then move forward with them, and we can have as many of those as we like, as we think are reasonable. They don't have to be production ready; they are basically places for people to collaborate, and they fall under the kubernetes-sigs organization. So I can take a to-do to get all that sorted out. So...
B
But I have a suggestion to make, right: I think it's not necessary that we really have to work in seclusion. I think what you have done in Nord is quite useful, API-wise. As I have suggested in multiple meetings, what I would like to happen is that we converge on the API, and that the API is kept, or maintained, in one single repo, and the controllers could be in different places.
B
The implementations of the controllers, that is. So I know that the apiserver-builder works in such a way that it generates the controller scaffolding and the API scaffolding together. But would it be possible for you to just segregate the API and the controller separately? If not, maybe create one more repo, which is for the controllers, and then we can probably take the Nord API as the base, and I can try to figure out where we differ and how we can merge the two together.
C
I'm happy to try to have a common API. If we're gonna be using sort of different primitives, I'm not sure how useful the API stuff in Nord is actually going to be, but having it be consumable... And I kind of take your point: the apiserver-builder doesn't really easily allow partitioning, and maybe, yeah, I don't know how to break it apart.
C
It is, and I guess the only complication with that... no, I guess it doesn't really matter. I was just puzzling over the rationale for not using the apiserver-builder. Yeah, I think it's a technical challenge; it's not something we can't overcome, and maybe we can work with it.
B
Yeah, I mean, how exactly to do it, or what might be the best mechanism to follow to arrive at this, we can think about; we can take some time to do that. But what I meant to say was that, probably, at least for the API, I would really like it to be converged in some place, and I am willing to work out the probably different abstractions in the API, so that they can maybe sync with what Quinton has proposed, right.
C
To me, I think that would be a better near-term solution, and we can just sort of agree on a partitioning in terms of how we review. And, you know, if people want to have oversight over things, great; if the oversight isn't necessary, we can agree that if you want to mind your own code, you can merge your code and we can merge ours, depending on what makes sense.
A
Well, I mean, so we need to... when I say "we", the company I work for needs to build something like this for real customers starting today; we can't wait any longer. We've tried for nine months to converge, and we got some of the way there, but we still don't have a converged API, and I think that's fine. We tried it, and we hopefully have a better understanding of where the differences lie.
A
So we're gonna start building stuff now, and we will do as much of that as we can in the open, and people can use it, or not, or make change proposals to it, and I think that's fine. And if there are other companies that don't want to use that, or want to take a different approach, that's fine and they can do that. But I don't think we... and at some point, you know, one or the other...
A
One more thing, and this might be premature, I just want to put it out there briefly: we're currently meeting twice a week, which may be excessive in the future. So maybe we want to talk about reducing that back to what we were doing before; we've sort of gone through the kick-start phase. I think it's probably worth going back to every second week, or maybe once a week rather than twice a week. Yeah, I think...
B
Okay, and yeah, about the difference in the API: I think I mentioned I did go through the implementation, and I have seen the documentation. The primary difference there, I see, is in the level of abstraction, I think, and we didn't use the same words. So yeah, that's just some input to put out there.