From YouTube: Kubernetes Federation WG sync 20180307
A: Given that in the last meeting we were actually discussing, or talking around, the doc that he published, and the expectation in the last meeting was that some people mentioned they needed more time to go through the doc more thoroughly, and maybe to do that. If that hasn't happened, we can postpone that particular discussion to the next meeting, and we can define or decide what today's agenda might be now.
B: I would be interested in providing a brief walkthrough of progress so far from the Sonoran side. We're still working through an RBAC issue around joining, so we're not quite ready to demo it yet, but I think just pointing out where we're at might be useful, right, because this may be a deeper dive. It's not going to be high level like the doc that's been discussed, but to me, some of those implementation issues are kind of the driving thing there.
C: Sure, yeah. Well, it's actually called kube-nord in this particular repo, but it's basically taking the same... it's leveraging the kubefed join operation and also integrating with the cluster registry to join clusters and create federated cluster objects and the necessary RBAC service accounts and secrets to be able to access all the resources in that cluster. So right now it's only a join operation that's been implemented, but yeah, it's basically leveraging kubefed in a fan-out way with the cluster registry, no?
B: So anybody familiar with the existing Federation v1 architecture will be familiar with how things are being done, okay, in our own prototype. So you have your sync controller; you have an adapter that's written, that's kind of type-agnostic; and then you have type-specific implementations of that adapter. Basically, an adapter ties the controller to underlying concrete types. In the case of a secret, we've kind of broken things down: we have a template resource, federated secret; a placement resource, federated secret placement; and an override resource, federated secret override. And so the adapter basically implements all the interfaces that a sync controller needs to propagate a federated type, composed of multiple resources, and propagate that to member clusters, you know, like create a secret in each cluster that placement was defined for. So this doesn't deviate a lot from what Federation v1 does, except that it's not using annotations; it's using discrete resources for different areas of responsibility.
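As a rough illustration of the decomposition just described, here is a minimal Go sketch of what the three resources for a secret might look like. The package, type and field names are assumptions for illustration, not the prototype's actual API.

```go
// Hypothetical shapes for the template/placement/override decomposition of a
// federated secret; names and fields are illustrative only.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FederatedSecret is the template resource: the secret to propagate.
type FederatedSecret struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// Spec.Template is the concrete secret created in each member cluster.
	Spec struct {
		Template corev1.Secret `json:"template,omitempty"`
	} `json:"spec,omitempty"`
}

// FederatedSecretPlacement names the clusters the secret should land in.
type FederatedSecretPlacement struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec struct {
		ClusterNames []string `json:"clusterNames,omitempty"`
	} `json:"spec,omitempty"`
}

// FederatedSecretOverride carries per-cluster variations of the template.
type FederatedSecretOverride struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec struct {
		// One entry per cluster whose copy should differ from the template.
		Overrides []struct {
			ClusterName string            `json:"clusterName"`
			Data        map[string][]byte `json:"data,omitempty"`
		} `json:"overrides,omitempty"`
	} `json:"spec,omitempty"`
}
```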
B: So this is more in keeping with Federation v1. I mean, in Federation v1, I think the only real override would be replicas; you didn't have the ability to override labels or images or anything like that. So I'm not saying this is ideal. This is just kind of like: let's get something implemented, let's get things actually propagated into clusters, and be able to vary some things per cluster.
B: So this works today; the tests run in integration. We're not using RBAC or the service account work that Ivan's working on quite yet, but the intent is that that will be the way forward; it'll be a little bit more realistic. The one gotcha that I've run into, you know, has been related to defaulting.
B: The image pull policy for the container spec in, you know, a replica set would be set to Always. As soon as we move away from that, and we start defining a federated resource via, like, this separate template resource, which I will post a link to, the problem is, when I create this resource in the Federation API, it doesn't know anything about kube defaulting. So any field that I do not provide in the Federation template...
B: ...the Federation says imagePullPolicy blank, and the member cluster says imagePullPolicy Always, and I have no way to know they're equivalent, unless I'm doing field-level-specific defaulting in the Federation, replicating exactly what Kubernetes does. The reconciler, the sync controller, will say: oh, there's a difference, let me send an update. And the next time the loop comes through, it will say: oh, let me send an update. And so you get what I call a reconcile loop that will never end. So the way I've worked around this, because the tests are passing today, is I'm actually providing values for all the fields that could potentially be defaulted, and I can give you a link to an explanation of that; it's more concise than what I'm saying now. But also, if you look at this test object, all the fields that are commented as defaulted, those are all the fields that I had to provide explicit values for to avoid reconciling forever.
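A minimal sketch of the failure mode Maru is describing, assuming a naive deep-equality comparison stands in for the sync controller's diff check (the helper below is illustrative):

```go
package main

import (
	"fmt"
	"reflect"

	corev1 "k8s.io/api/core/v1"
)

// needsUpdate is a naive comparison of desired (template) state against the
// observed cluster object. The template never passed through kube defaulting,
// so defaulted fields always differ from the live object.
func needsUpdate(template, observed corev1.Container) bool {
	return !reflect.DeepEqual(template, observed)
}

func main() {
	// The federation template leaves ImagePullPolicy empty.
	template := corev1.Container{Name: "web", Image: "nginx"}

	// The member cluster's apiserver defaulted the field to "Always".
	observed := template
	observed.ImagePullPolicy = corev1.PullAlways

	// Always prints true: the controller sends an update, the apiserver
	// defaults the field again, and the reconcile loop never terminates.
	fmt.Println(needsUpdate(template, observed))
}
```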
B: Just because I don't know, I haven't asked about that, but I'm assuming that, like, it's not externalized as a staging repo today; it's not something we can vendor. We'd either have to copy it and maintain that, or we'd have to somehow... I don't know. Anyway, one possible way I thought about was copying the code, and I discarded that idea. What I think the best possible solution is, is that the resource version would be captured for the target resource. Like, in a member cluster, say there's a secret that we're managing via Federation: I capture the resource version of the last update or add that I performed, and I would also capture the resource versions of the template, override and placement that generated that resource. And then, when I saw the target resource at a given version, and none of the other resource versions had changed, then I would know that I did not have to perform an update; even though the values were different, they were technically equivalent.
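A sketch of that resource-version bookkeeping idea; the propagatedVersion record and where it would be stored are assumptions for illustration:

```go
package propagation

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// propagatedVersion records the resource versions involved in the last
// write to a member cluster.
type propagatedVersion struct {
	templateVersion   string // version of the federated template
	overrideVersion   string // version of the override resource
	placementVersion  string // version of the placement resource
	clusterObjVersion string // version last written to the member cluster
}

// upToDate reports whether an update can be skipped: if none of the inputs
// changed and the cluster object is still at the version last written, any
// remaining field differences are just defaulting, i.e. technically
// equivalent.
func (v propagatedVersion) upToDate(template, override, placement, clusterObj metav1.Object) bool {
	return v.templateVersion == template.GetResourceVersion() &&
		v.overrideVersion == override.GetResourceVersion() &&
		v.placementVersion == placement.GetResourceVersion() &&
		v.clusterObjVersion == clusterObj.GetResourceVersion()
}
```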
B: So here we're looking at, like, the component resources for a federated replica set. I kind of started just thinking of the object that I call federated replica set; that is really the template. I'm almost tempted to change the naming convention so, instead of being federated-blank, federated-blank-override, federated-blank-placement, it would just be replica set template, replica set override, replica set placement, and just rely on the fact that it's namespaced in, like, the federation.k8s.io group to know that it's a federated resource.
B: What do people think about that idea, at least in the near term? Sorry, putting it under which group? So the group wouldn't change, but currently the naming convention I'm using is federated-blank, federated-blank-override, federated-blank-placement. But the way that I'm kind of considering it, as per the code at the top of the screen share, I just call the federated replica set the template, because currently that's all it is. So maybe there's a better...
C: One of the things to keep in mind is, if we are aggregating the federated API server, when you just want to get at a particular resource, you would have to then always specify the group as well. So if you want replica sets, but you wanted the federated version, you would have to say replica set with, you know, the group name; it should still...
B: I'm talking about the name of the type. Like, at the top of the screen share there's fed/v1a1 FederatedReplicaSet; I'm suggesting, then, replica set template. So it wouldn't be "replica set"; there wouldn't be a direct correlation with kube types, so hopefully it wouldn't require the group name to be specified.
B: So, like, the general form of a federated type template, as you can see here, is a spec that just includes a template, and the template is a concrete type in some API. In this example, it's all just in the kube API; it could as well be, like, an aggregated API. The placement resource, starting on line 359: all it is is a list of clusters.
B: Overrides are, as I said before, type-specific. So in the case of a replica set, it's basically a mapping of cluster names to a replicas count for that cluster. One question that my prototyping has sort of raised for me is whether... like, previously we've talked about "maps bad, lists good", or something?
B: In the placement spec... or maybe not in the placement spec, because if I have duplicates in the list of clusters it doesn't really matter; when I'm applying placement I'm only asking: is this cluster name in the placement list? If it appears twice, I don't care. But if I have overrides, like, starting on, you know, an example is line 377...
D: So, see, I will dig up the guidance on why lists of objects are preferred instead of maps. But what we could do is have a list where you would have the cluster name, and then it would have, like, a list of overrides within. So you'd have a type that was cluster name and then a list of overrides. I kind of...
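What that suggested shape might look like, as a rough sketch following the "lists of named objects, not maps" convention (type and field names are illustrative):

```go
package propagation

// ReplicaSetOverrideSpec follows the API convention preferring lists of
// named objects over maps: the cluster name is a field of each entry rather
// than a map key, and duplicates can be rejected by validation.
type ReplicaSetOverrideSpec struct {
	Overrides []ClusterReplicaSetOverride `json:"overrides,omitempty"`
}

// ClusterReplicaSetOverride varies the replica count for one cluster.
type ClusterReplicaSetOverride struct {
	ClusterName string `json:"clusterName"`
	Replicas    *int32 `json:"replicas,omitempty"`
}
```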
E: Go ahead, sorry. So you talked about this defaulting thing and having to add in every sort of defaulted field. One: do we care about, or worry about, the case now where, you know, we go to the next version of Kubernetes and another field is defaulted? Like, is that going to cause us problems?
B: That would mean that, as the defaulting changes between kube versions, or if you had multiple kube versions in a Federation, you wouldn't care. Like, if someone's using, you know, a kube version that has an image pull policy of this, or configured... it might even be something you can configure, I think, for the cluster?
B: If you had member clusters with different configuration that provided different defaults, the Federation wouldn't care. Like, if you're not providing a value for a field and, you know, it gets defaulted... I mean, we probably should document that that could happen, but the synchronizer, the propagation mechanism, does not have to worry about the implications. It's up to the user how they want to deal with it, yeah.
D: Like, keeping the object on the side of the Federation API and an annotation on the cluster side, so that you can have a reconciler that is able to detect "this one's in sync with me, everything's good", is probably the way to go. And just for context, Greg, I don't think that we're intending to paint this particular thing as anything more than just, like, a question to figure out the answer to, rather than representing a proposal that we do things a certain way.
A: I have one question about this defaulting. So have you seen how exactly, say, a Kubernetes replica set stores the template defaults? Like, a Kubernetes replica set also has a pod template. Right now, with what is there on the screen, would that data be stored under the replica set? Not that it will be stored under that, because it's the controller, right?
B: I'm not sure I understand your question. Is it so...?
A: Yes, so let me explain what I mean. So the defaulting logic actually kicks in when the particular object, the details of the spec that you specify, gets stored in etcd. So whichever version is being used to store in etcd, that is the defaulting which is actually used. So in the case of, for example, the replica set in Kubernetes, it is storing the pod template spec also, right?
B: The comparison was failing, and when I looked at the member clusters, all these fields were filled in, as you see, because those are the default values that the particular API I was targeting was filling in. So as far as, like, a secondary step, like a replication controller actually creating pods: that's completely separate from this. Even before you get to that point, these fields are getting defaulted.
B: It's not available outside, and even if we were to say we wanted to pull in the defaulting logic, the issue is: which version do we set it to? Like, we set ourselves up to having to maintain multiple versions of that, and exposing that, whatever that is, to the user, versus just allowing whatever defaulting is happening at the kube level to just do its work.
B: My concern is: if I don't default anything, if I don't do the defaulting, then it'll be the same. Like, if I create this resource, without the defaulting, in the kube API via the synchronization controller, it'll appear this way, and if I use that exact same YAML and put it into the cluster via kubectl, I would get the same result. That's kind of my goal here: I don't really want to have specialized behavior where it's like, if I do it through Federation I get this.
A: For example, we'd be using v1beta1 or v1, the core v1 API, to store in etcd. For example, this template, the federated replica set template that you would store: it will persist in the Federation API server also, right? And to persist it we would be using a particular version. If we are using, say, v1beta1, and the same version was used by k8s, then our defaults would be the same. That's what I meant.
D: So, for example, there is not a way... like, for one, validations and defaults are not exported from Kubernetes, and I've run into this. I've run into that particular issue in the context of another problem domain, where we wanted to have, and we do want to have, a subset of the pod spec that you can specify on another resource, and there isn't a good way to readily consume the validations and defaults associated with those resources outside of Kubernetes. And I think that's part of the problem space we're talking about here.
D: ...and versions of which defaults are applied, and also probably which validations are applied, in Kubernetes. And that's a tough nut to crack, because it's very common for new fields to be added and the API version of a particular group not to be bumped. So, for example, there are a lot of cases in Kubernetes where, within the v1 resource, new fields have been added and new defaults have been added, but because they're added in a backward-compatible way, they don't necessitate a version bump.
D: I think this actually ties back also to one of the foundational questions that we were grappling with when we began discussing a Federation v2 API, which is: what's the compatibility matrix between a version, or a particular copy, of the federated API and the targets for particular clusters? Does that make sense?
D: Okay, that's fine. I do think it's a good question, and I expect that probably the fact that we're now talking about this question represents progress. I think it will be an important question to nail down, because, if you think about it, the folks in the audience of this call are likely to be people that have spent a significant amount of time thinking about this problem domain, and from the last 10 minutes or so, it sounds like there's no super clear answer to this question.
D: So if you think about the fact that, as subject matter experts, it's tough to immediately say how to deal with this, we should remember that users are going to have an even tougher time with it. So part of the challenge here is to be able to make decisions about behaviors that, one, are intuitive to users, to the extent that they, if possible, feel right at first blush and definitely feel right after you spend some time learning about the problem space, and then also...
B: I think I'm done, unless there are unanswered questions. I mean, that was just getting propagation working. The major challenge I ran into is around defaulting, but on the whole I'm pretty happy with having synchronization kind of separated out. I think one of the next steps in terms of prototyping is going to be implementing federated services.
B: Ivan and I touched on that briefly this morning, but the idea would be to separate the current implementation, really one where programming of external DNS and synchronization of member clusters is kind of done in lockstep as part of a single controller, and we'd actually implement something separate from the sync controller that's been described today, that would program the DNS in response, probably, to status changes. And the reason being is that I think we have some users that just want to propagate services; they want to copy the service definition into member clusters.
D: So this thread has actually reminded me of a question that Christian Bell asked in the Monday Federation working group meeting, and it's something that I tried to respond to at the time. Unfortunately, Christian isn't here, but since these meetings are recorded, he'll be able to watch this. Yeah, I did want to spend a few moments on it, though.
D: ...for somebody that has an existing investment in expertise in Kubernetes. And as a secondary goal, because this is a very broad problem space... we had two kind of influencing factors. One, it's a very broad problem space, and developers tend to be very opinionated in certain cases about the APIs that they expose, for example, to users, and we want to have...
D: We want to end up in a state where we have primitives that effectively capture the problem space, that you can build higher-order APIs on top of. So that if you have a great idea, like: hey, I see a lot of really interesting pieces here, I've got this great idea for how I could tie them together in an API that a user would use directly, then you should be able to build that without implementing the rest of the stack.
D: So, for example, in the overview and update that Maru gave today, I think it's pretty clear that it would be a mistake to trivialize, for example, propagating resources to multiple clusters. That's not something that I would prefer to have to re-implement if I had an idea for something higher in the stack. So we want a modular solution, or set of primitives, that people can use and program to, and take the parts that they want and implement...
D: ...the rest. On the other side, we, I think, want to be able to take advantage of work that happens in the community. For example, propagation is something where there are a couple of different efforts and tools that I'm aware of that do something very close to propagation, and there are also going to be users that might say: hey, I've already got my own propagation thing that I want to plug into this.
B: How would we find a solution that solves some of the use cases our customers are asking for? And the conclusion that I came to was that, in order for any given mechanism for coordinating the application of configuration across clusters, you need certain capabilities. Like, for one, maybe namespacing, so you're not having configuration stepping on itself. Maybe authentication, you know, some degree of control over who's doing what. And also validation of configuration, like: oh, I have some YAML...
B: ...how do I know that it's valid? Providing easy, early feedback to users is helpful. And that's not saying that, you know, the CI-based or CI-centric mechanisms aren't useful, but in the case where you want people to be able to ad hoc create and define, you know, the configuration they want to propagate to multiple clusters, and have it access-controlled and have it validated...
B
You
could
replicate
what
the
cube
API
does
or
you
could
just
use
the
cube,
API
and
I
think
that
there's
already
existing
solutions
like
you
apply
or
that
kind
of
fit
the
bill
in
terms
of
I
just
want
to
apply
some
configuration,
you
could
layer,
you
could
reproduce
what
the
cube
API
does
you
could
layer
it
on
top
of
that
and
that's
a
solution.
It's
just
one
like
from
a
kind
of
a
economic
perspective,
I've
kind
of
come
to
the
conclusion.
B
I
think
it
makes
more
sense
to
just
reuse
the
key
of
API
for
use
cases
that
require
those
properties.
If
you
don't
need
that,
if
you're
happy
doing
something
simpler,
if
your
use
cases
are
restricted
to
the
degree
that
you
can
afford
the
validation
of
Education
or
whatever
or
you
don't
need
it,
then
Federation
maybe
isn't
a
good
fit,
but
for
enterprise
users,
where
they
do
have
sort
of
policy
based
ways
of
working.
I.
Think
that
that's
important
in
that
case
Federation
does
make
sense
or
the
Federation
like
API
does
make
sense.
A: Yeah, I think that was good. I wanted to note that we still have 15 minutes, and I had two more items which I thought might be useful for discussion, or maybe we can initiate the discussion in this meeting and continue it in some of the further meetings. So one is the second point that you mentioned, which was about the authentication and authorization.
A: So I actually mentioned one comment on Quinton's doc also, which is towards the end of it, saying that we maybe need some discussion around this, keeping the v2 implementation in perspective. Like, whoever was associated with that even earlier, we do remember that one of the unsolved problems there, or the problem which is yet to be solved, is: as which user should Federation talk to the member clusters? So, given the v2 perspective, does it still remain the same challenge, or is it different?
A
For
example,
we
have
talked
about
two
different
kind
of
approaches.
The
fully
based
approach
might
be
simpler
to
implement
in
terms
of
in
terms
in
terms
of
the
same
aspect,
one
other
point
which
I
wanted
Maru
you
to
take
five
minutes
which
you
can
take
before
that
is
maybe
five
minutes
will
dis.
Tell
about
how
exactly
you
have
hang
in
the
testing
in
the
current
implementation
that
we
are
going.
B: If your application has these characteristics, you can only go into these clusters, and for some environments, maybe they want to enforce that, and so having RBAC enabled in the Federation API: that's more what I was talking about. What you were talking about was: how does Federation, like a synchronization mechanism, a propagation mechanism, interact with the member clusters? And for now, I'm kind of assuming the status quo of: we have a single service account that has all the access it needs to...
B: ...the underlying clusters, to, you know, create resources and update resources, delete resources. That's the way you do it in a push-based model. We're not going to approach having separate service accounts that would have limited access, say, to a single namespace, at least in the near term.
B: The reasoning behind that is that, as soon as we start limiting service accounts' access on a per-namespace basis, that prevents us from listing all resources in a useful way. Like, listing all resources, not just on a per-namespace basis, just isn't something that's afforded by the API machinery today. So if we wanted to allow restricted service accounts, it would effectively mean that for each namespace in a member cluster, you would have to do a separate watch, and that just adds so much complexity.
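To make the watch constraint concrete, here is a minimal client-go sketch, using the contemporaneous (pre-1.18) Watch signature without a context argument; the function itself is illustrative:

```go
package propagation

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchSecrets illustrates the trade-off: one cluster-wide watch needs a
// broadly-privileged service account, while per-namespace service accounts
// force one watch per managed namespace.
func watchSecrets(client kubernetes.Interface, namespaces []string) error {
	// Broad service account: a single watch covers every namespace.
	if _, err := client.CoreV1().Secrets(metav1.NamespaceAll).Watch(metav1.ListOptions{}); err != nil {
		return err
	}
	// Restricted service accounts: a separate watch per namespace, which
	// is the complexity being avoided in the near term.
	for _, ns := range namespaces {
		if _, err := client.CoreV1().Secrets(ns).Watch(metav1.ListOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```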
A: My point was not necessarily to arrive at a solution right now. My point was only... I think it is an important point that we need to discuss, and it ideally should have some attention from all of us, and maybe we should set aside some time in one of the meetings to have a definite settlement on what we all think is right to do as of now, and whether, if there is a problem in the future, there is some alternative for that also.
E: Yeah, I think it makes sense to have a deeper discussion about this. The one thing, I don't know, that struck me was: like, oh well, maybe users are going to have different service accounts to access different clusters, but the service account for each cluster would be, you know, able to access the whole cluster.
B: As I said, the way the API machinery works, you would require watches on a per-namespace basis rather than a per-cluster basis, and so it may prove a problematic thing to do, depending on the number of namespaces you have to watch. And so, if we were going to implement it, I think it would kind of have to be an either/or thing, like maybe on a per-type basis; but I don't think it's like: oh yeah, we just want to build it in.
D: It does have a cost: an implementation cost, and a cognition and education cost for users. In the case of, like, differentiating by namespace the service account that's used to communicate with the cluster, I could potentially see something where, for everything but secrets, you might use a service account that can just see everything to do the observation, and then differentiate writes by...
B: Yes, yeah. I guess my instinct in the near term is that that would be something to delegate to a pull sort of model, partly because then people kind of have an easy choice, and partly also because, if I were going to have a lot of watches, it would probably make more sense to have those watches locally. So if I had a ton of connections, they wouldn't be subject to the vagaries of, like, connectivity between federation and cluster; they would be local, yeah.
B: The testing is pretty similar to what was done in Federation v1, in that there's kind of the same sort of test framework, except instead of targeting a single resource, it's targeting the template, the placement and the override for a given type. The one sort of enhancement that I'm working on, in concert with sig-testing, is creating end-to-end tests that can run either way: if provided a kubeconfig, the tests will run against that.
B: But if you don't provide a kubeconfig, it will actually deploy an integration-test cluster, very similar to what the integration tests currently use. But the goal there is maybe removing the idea of an integration test and moving towards an e2e test that can run against a test-managed cluster or an unmanaged cluster, one for which a kubeconfig is just supplied. And the work in sig-testing that's ongoing is kind of defining a common interface for instantiating and managing test clusters, so that would be possible, say.
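A sketch of that either/or behavior; the flag name and the startIntegrationFixture helper are hypothetical stand-ins for the framework being described:

```go
package e2e

import (
	"flag"
	"testing"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

var kubeConfig = flag.String("kubeconfig", "",
	"run tests against the cluster in this kubeconfig; if empty, deploy a managed test cluster")

// clusterConfig returns a rest.Config for either an unmanaged cluster the
// developer supplied or a test-managed fixture the framework brings up.
func clusterConfig(t *testing.T) *rest.Config {
	if *kubeConfig != "" {
		cfg, err := clientcmd.BuildConfigFromFlags("", *kubeConfig)
		if err != nil {
			t.Fatal(err)
		}
		return cfg
	}
	return startIntegrationFixture(t)
}

// startIntegrationFixture is a hypothetical placeholder: the real framework
// would deploy an integration-test-style cluster (etcd + apiserver) here.
func startIntegrationFixture(t *testing.T) *rest.Config {
	t.Helper()
	t.Skip("no managed-cluster fixture in this sketch")
	return nil
}
```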
B: For anybody who isn't familiar with that from Federation v1: as part of sort of modularizing the synchronization mechanism, adapters are used to integrate with the underlying types. That's kind of the implementation side, but then there's kind of a test side as well that uses those same adapters to drive, like, CRUD testing in the API: can I create a resource, does it propagate the way I expect; can I update it, does it propagate?
B: Can I delete it, does the deletion propagate? And so it means that when you define a new federated type, you define the adapter and you register it. The way the controllers work is: you actually register the adapter and the kind, and the controller manager actually instantiates a controller for every single registered type. And so the testing works very similarly, where it'll actually iterate through all the registered types, instantiate the controller necessary against the API that's already deployed as part of the fixture, and then it'll run a test...
B: ...that's keyed to that specific type. And so part of the goal here... like, there are kind of two goals in this development effort. One is creating something that works, that flushes out the primitives that have been discussed; but the other part is lowering the cost of implementing and maintaining a new federated type, so we don't have to create, you know, these new types and then create a new controller and new tests.
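A rough sketch of the register-once pattern being described, where one registry feeds both the controller manager and the generic CRUD tests; the Adapter interface here is an illustrative stand-in, not the prototype's actual one:

```go
package federatedtypes

import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime"
)

// Adapter is an illustrative stand-in for the type-specific adapter
// described above; the real interface has more hooks.
type Adapter interface {
	Kind() string
	NewTestObject(namespace string) runtime.Object
}

var registry = map[string]func() Adapter{}

// RegisterAdapter is called once per federated type; the controller manager
// starts a sync controller per entry, and the tests walk the same registry.
func RegisterAdapter(kind string, factory func() Adapter) {
	registry[kind] = factory
}

// RunCRUDTests runs the generic create/update/delete propagation checks
// against every registered type, so a new type gets tests for free.
func RunCRUDTests(t *testing.T) {
	for kind, factory := range registry {
		adapter := factory()
		t.Run(kind, func(t *testing.T) {
			obj := adapter.NewTestObject("crud-test")
			_ = obj // create, verify propagation, update, verify, delete, verify
		})
	}
}
```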
A: Seems okay. Just one last question about that. So you mentioned that you are working with Cassandra to have some kind of mechanism by which managed clusters could be brought up; for example, you said maybe Docker-in-Docker or something like that. So is it only the interface defined to bring up such a cluster that the test framework might be working on, or something else?
B: The starting point is really just to define the interface that, like, a test framework would use to bring up a test-managed cluster, and there'll be some prototyping work around actually integrating something, maybe Docker-in-Docker-based. But, I mean, really it's not super complicated. The motivation for this, I think, is included in the document I just linked to you.
B: So, if anybody's interested in discussing this further... I mean, it's pretty high-level today; the goal is just laying the groundwork for out-of-tree efforts to have common test infrastructure. Currently, in Kubernetes, there's a test framework, and there are lots of ways to bring up clusters.
B: ...the question is how we actually have shared infrastructure for testing. So the first step is creating kind of a way of having test-managed clusters behind an interface, where tests can sort of not care about the implementation of, you know, how a cluster is brought up; all they're going to get is, like, a kubeconfig and maybe an indication of how the cluster is configured. Longer term, maybe tests would define what characteristics they require of a cluster.
B
So,
for
example,
if
I'm
running
a
test
that
really
only
requires
an
API
I
would
have
some
sort
of
annotation
on
the
test.
That
indicated,
you
know,
requires
:
api
or
something-
and
I
would
have
another
test
that
you
know
I
require.
Were
clones.
I
actually
need
nodes
that
are
gonna,
run
pods,
but
I
run
those
tests
together.
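A sketch of what such per-test capability requirements might look like; the capability names and ClusterFixture type are hypothetical:

```go
package e2e

import "testing"

// Capability names a characteristic a test needs from its cluster; the
// values here are hypothetical examples.
type Capability string

const (
	CapabilityAPI   Capability = "api"   // only an apiserver is needed
	CapabilityNodes Capability = "nodes" // worker nodes that can run pods
)

// ClusterFixture is a hypothetical handle to a test-managed or unmanaged
// cluster, reporting what it provides.
type ClusterFixture struct {
	capabilities map[Capability]bool
}

func (c *ClusterFixture) Has(capability Capability) bool {
	return c.capabilities[capability]
}

// requires skips a test when the cluster it was handed lacks a capability,
// letting API-only and node-requiring tests share one suite.
func requires(t *testing.T, cluster *ClusterFixture, caps ...Capability) {
	t.Helper()
	for _, c := range caps {
		if !cluster.Has(c) {
			t.Skipf("cluster does not provide %q", c)
		}
	}
}
```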
B: The other thing that I'm kind of experimenting with, in the context of Anor, is: you have managed and unmanaged clusters. Well, maybe I would like to run... We ran into a problem trying to debug, like, with RBAC, as Ivan mentioned. What we're possibly going to try to do, since he's running into trouble, is run against an unmanaged cluster, but disable the controller manager for Federation and actually run those controllers in memory, via a flag.
B: So essentially, if you wanted to run the controller for, say, secrets in memory, you could deploy Federation but disable the secret controller as part of the deployment, and then you could run it as part of the test and debug it. It would be very trivial to do; it would be something that, you know, anybody can do, rather than having to hack code or configuration.
B: But the goal is kind of striking a balance between tests that run really well in continuous integration, automatically, and tests that are useful for developers, that can serve as kind of jigs for debugging. Because I kind of want both to work, but as a developer, I really want things that are useful to me. I just don't want to throw tests into the cloud and have them run.
B: I want to actually be able to run locally and introspect on what's going on, and so my goal in participating here is to make sure that we do have a developer-focused result, rather than just something that, you know... the work that sig-testing does is amazing; Kubernetes lives and dies by that CI. But we definitely need more of a developer focus on the testing side, so it's not just writing tests, but actually using them as a tool.
B: I'm just going to experiment with integrating with an existing, like, Docker-in-Docker solution, because, as per the document... there's, like, a test cluster config, how you would define the configuration for provisioning a cluster, and we kind of have to flush out something that's common across different implementations. I mean, things like number of nodes are pretty easy, but, like, you know: which controllers do I want to run, and what configuration do I want them to run with?
B
We
need
it
a
common
way
of
representing
that
across
different
mechanisms
for
deployment
test
clusters.
So
my
goal
is
kind
of
just
trying
to
work
through
deploying
like
something
darker
and
darker,
based
and
figuring
out,
like
what
configuration
representation
will
be
useful
to
do
that
and
then
try
to
and
I'll
try
to
apply
it
to
different
mechanisms.