From YouTube: Kubernetes Federation WG sync 20180912
C
Just kind of getting a feel from the group: do people have use cases they're interested in around, excuse me, around doing some basic programming of load balancers, in a manner similar to how we have ExternalDNS programming DNS? We have started to get some interest from our customers in doing that, and I just wanted to get a feel from the group whether this is something anybody had thought about yet, and whether folks had, you know, specific requests from people that were interested in using Federation, etc.
C
Yeah, so I think the most fundamental, essential use case is: given a service that is deployed in multiple clusters that are fronted by a single load balancer somewhere, reprogram that load balancer with the current set of endpoints that correspond to load balancers or externally addressable IPs for the clusters where the service is deployed; or rather, for the service in each cluster. Does that make sense?
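The reconciliation step described above (collect the per-cluster external endpoints for a service, then reprogram the load balancer when the set drifts) can be sketched roughly as below. This is only an illustrative model: the data shapes and the `reconcile` helper are hypothetical stand-ins for a provider-specific load balancer API.

```python
def desired_lb_targets(clusters, service):
    """Collect the external endpoint of `service` in each cluster that runs it."""
    targets = []
    for cluster in clusters:
        svc = cluster["services"].get(service)
        if svc and svc.get("external_ip"):
            targets.append(svc["external_ip"])
    return sorted(targets)

def reconcile(lb, clusters, service):
    """Reprogram the load balancer only when the target set has drifted."""
    desired = desired_lb_targets(clusters, service)
    if lb.get("targets") != desired:
        lb["targets"] = desired  # stand-in for the cloud-specific API call
    return lb

clusters = [
    {"name": "us-east", "services": {"web": {"external_ip": "203.0.113.10"}}},
    {"name": "eu-west", "services": {"web": {"external_ip": "198.51.100.7"}}},
    {"name": "ap-south", "services": {}},  # service not deployed here
]
lb = {"targets": []}
reconcile(lb, clusters, "web")
print(lb["targets"])  # → ['198.51.100.7', '203.0.113.10']
```

The per-provider part is the `reconcile` write; everything before it is provider-neutral, which is the property ExternalDNS exploits for DNS records.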
A
Okay, I see. So, within our team, at our group, we had had multiple discussions about this earlier, and it keeps coming back, so I'll just put out some thoughts or conclusions we reached when we had those discussions. It's something along these lines. Point number one is that each cloud provider has a different mechanism, a different API, to do this.
A
Provisioning or programming, okay: a load balancer from any cloud provider can certainly be configured in such a way that it can direct traffic to any known public location. So, for example, a load balancer provisioned in one cloud can direct traffic to Amazon clusters also, or to on-prem clusters also, given the addresses of the targets.
A
There we also ran into, I mean, there are problems with that too, which we couldn't wrap our heads around. So this is needed; at our end we certainly need this, and we have use cases where applications or workloads are deployed in multiple clusters and a central, single endpoint is needed to reach those applications. But we have not been able to solve it; we haven't come up with a good solution for that. Yeah.
C
It's really convenient that for DNS we already have this independent building block in ExternalDNS that knows how to program these different APIs. Yeah, I agree, the challenges that you highlighted, Irfan, I think are very real. At this point I would characterize our thinking as: we want to understand exactly what use cases people have, if they have them, and that is probably necessary information to inform any further steps.
C
Yeah, I think there are a lot of different technologies that might solve parts of this problem. To go back to something that Irfan mentioned: yeah, you could also look at building a software load balancer that was independent of, you know, any particular cloud platform. I do think there are going to be enterprise users that have existing load balancer appliances and infrastructure that they would ideally want to use with Federation.
A
No, rather than actually networking across multiple clusters, the most simplified use case for this is that a user, whichever category of user, has deployed workloads across multiple clusters. They can be simple replica-based workloads, and the replicas don't talk to each other; but the users of that application need to reach the application. So one mechanism is that if you have, say, three clusters, you provide three different endpoints, and it's the user's choice which endpoint to connect to.
A
That's not a good solution. So, as of today, what happens is that the application maintainers deploy a separate load-balancing tier somewhere. It might be outside the Kubernetes clusters; it might be outside the purview of Kubernetes itself. That separate load-balancing tier, whether deployed as an application or provisioned through infrastructure, is used as the endpoint to reach those applications.
E
To add to the use case around having a single load balancer endpoint that would then target multiple endpoints distributed in different clusters: I think also of interest would be, say, users having use cases around being routed to the nearest cluster able to satisfy the request, the nearest healthy cluster. That load balancer would also have to maintain some sort of mechanism to determine, like, least hops to clusters, as well as the health state of the endpoints in those clusters.
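A minimal sketch of that routing policy, assuming health state and hop counts are already being measured (the data shapes here are hypothetical): filter to healthy clusters first, then prefer the fewest hops.

```python
def pick_cluster(clusters):
    """Return the name of the nearest healthy cluster, or None if none are healthy."""
    healthy = [c for c in clusters if c["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda c: c["hops"])["name"]

clusters = [
    {"name": "near-unhealthy", "hops": 2, "healthy": False},  # closest, but down
    {"name": "far-healthy", "hops": 9, "healthy": True},
    {"name": "mid-healthy", "hops": 5, "healthy": True},
]
print(pick_cluster(clusters))  # → mid-healthy
```

The hard part in practice is populating `healthy` and `hops`, which requires active health checking and topology awareness; the selection itself is trivial once those exist.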
A
Yes, that's correct, Ivan. So all the other load-balancing features, you can say, can be applied on top of this, once you have a solution for the first use case. After that, whatever load-balancing features there could be, like how to load balance, or, as you mentioned, geographically nearest, or a minimum number of replicas should be reachable, or the clusters should be dispersed: all those things can then be applied, all those use cases apply to this.
A
You can say that, yeah. And actually, as of today, some cloud providers provide a mechanism. For example, if you look at MCI, the multi-cluster ingress: it uses the Google load balancer, the Google-provisioned load balancers, and they have a lot of these features and they understand multiple clusters. That means multi-cluster ingress can use some of these load-balancing features and program the load balancer such that a single application can be reached through this Google load balancer across the different clusters.
C
I think, I feel like it's been quite a while since we released a new version, and we've done a lot of work, some of which is still in progress. You know, we've added the DNS facility and worked to make Federation deployable at the level of a namespace. So, to be perfectly transparent:
C
My team is working on a Developer Preview for OpenShift 3.11, which is coming up pretty soon, and we would like to use the most up-to-date version, which means I would be very interested in releasing a new version soon, after all the namespace changes are finished. I wonder how others are feeling about that.
A
Yeah, I mean, I think we discussed this in the last meeting, I guess, and I think what you're saying is reasonable. Okay. So the action item we took in that meeting was that one of us has to create a milestone; it simply slipped my mind, anyway. We can take that as an action item: post this meeting, we'll create a milestone and tag the PRs and issues which need to be part of that milestone, and then we can track it.
B
So yeah, because in the 0.2 release we also have some new features, and if people want to test those new features, they may want some examples to test with. So I think it would be great if we can add examples for what's being introduced in this release. Okay, and also one question: do we have some plan for how to release?
B
Yeah, and also another question: I think we also had some discussion about whether we want to update the latest tag to reflect the master branch. Do we want the latest tag to map to the latest release, or should it map to the master branch?
C
To go into just a little bit more detail: when new commits are made to the master branch, we rebuild the canary tag. When we tag a release, like when we do a GitHub release, it pushes a tag with the name of the release to the GitHub repo, and Travis picks up that new tag and will do an image build for that specific tag. So, for example, if we merge a pull request to master:
C
So, to add a tiny bit of missing detail that I should have added before: when we push a new tag to the repo, say when we tag 0.2, Travis will rebuild it. We'll build an image with tag 0.2, and it will update the existing latest tag to point to the same SHA. So latest moves when we release new versions; canary moves when we merge stuff to master.
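The tag semantics just described fit in a few lines: merges to master move `canary`, while a release builds its version tag and moves `latest` to the same SHA. This is a toy model of the CI behavior for clarity, not the actual Travis configuration.

```python
def apply_event(tags, event):
    """Update image tags for one CI event: ('merge-to-master', sha) or
    ('release', (version, sha))."""
    kind, payload = event
    if kind == "merge-to-master":
        tags["canary"] = payload          # canary tracks master
    elif kind == "release":
        version, sha = payload
        tags[version] = sha               # immutable version tag, e.g. "0.2"
        tags["latest"] = sha              # latest follows releases, not master
    return tags

tags = {}
apply_event(tags, ("merge-to-master", "abc123"))
apply_event(tags, ("release", ("0.2", "def456")))
apply_event(tags, ("merge-to-master", "aaa999"))
print(tags["canary"], tags["latest"], tags["0.2"])  # → aaa999 def456 def456
```

Note the last merge moves only `canary`: `latest` stays pinned to the 0.2 release SHA, which is the behavior being asked about.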
B
So here's what's proposed: I think we should have two levels of RBAC. The first layer is host-level RBAC, and the second is member-cluster-level RBAC. So, basically, for the host-cluster-level RBAC we need to check whether the user is able to create exactly that resource in a specific cluster, and since RBAC is handled by the host cluster:
F
Your rationale for the host-cluster side makes sense if you want to have differential permissions for people interacting with Federation. So, for example, a user that can, you know, create a federated deployment can't necessarily join a cluster; that makes sense. What's not making sense to me is when you're talking about member-cluster RBAC. There are two interactions with federation as it exists in v2: one is joining a cluster, which involves creating a service account, setting up the permissions, etc., and the second is that all the controllers use that service account.
F
I'm not disputing the possibility that consumers of Federation might want RBAC permissions separating people joining clusters versus manipulating Federation primitives, so I'm not arguing that's not a reasonable case. It's more that when you talk about RBAC at the member-cluster level, I think it's technically really difficult and I don't really see the point. If I can manipulate a Federation primitive at the level of the Federation control plane, why do I need to police that at the member-cluster level?
B
Well, my counter-comment is: if you want to add a new node into a Kubernetes cluster, there needs to be a check to see whether it is allowed to join the platform. So, in the context of Federation, I think maybe we also need a similar check, to see whether a member cluster is allowed to join the Federation control plane.
F
The administrator who's joining the cluster, because the Federation control plane expects to have cluster admin. Then, yeah, the namespace work sort of narrows that focus: if there was a desire for, like, "these users are going to have access to these clusters, and those users are going to have access to those clusters," in the namespace case there is a possibility of restricting Federation to a single namespace, with permission only to that namespace. And so, I mean, it's kind of in the nature of the way Kube API machinery works:
F
You either have all namespaces or one namespace; it's really hard to do multiple namespaces, just because of the way informers work. So effectively it's either Federation at the namespace level, where people can each run their own controllers in their own namespaces, or just a global thing. The side effect of this is that we don't really have to worry about granular RBAC controlling the member clusters.
B
Yeah, but how about this situation: say a user is able to create resources in cluster one, but is not able to create resources in namespace one in cluster one. So the user creates a federated deployment using the Federation control plane, and after it's propagated to cluster one, we are not able to check whether the user is actually allowed to create resources there, right?
F
That's kind of a limitation of how Federation is implemented, and, like I said, the way informers work means it's all or nothing: you either have a single namespace or all namespaces. And so what this means is, if users need selective access, if you want to enforce selective access, then going with the namespace-based deployment is pretty much
F
your best option. We could update the controllers so that they could operate on a per-namespace basis, but it would add a huge amount of complexity, and it doesn't really solve the problem any better than just running, you know, a controller per namespace. There's a certain amount of overhead associated with that, but I think we really need
F
definitive user demand for selective permissions before we go and add a huge amount of complexity to the sync controllers to deal with RBAC in the way that you're suggesting. And I guess, if you haven't read it, I'm trying to remember what the document is called; I'll fish it up and post it to you over Slack if you haven't seen it. But there's the idea of impersonating a user to the underlying clusters, and having some way of federating authentication.
C
So, a couple of things I would say. Let's all be clear that if you want to implement this type of RBAC mechanism, you could implement it with, like, a validating webhook. I will say, though, that I don't know that this is actually going to match the permission setup that a lot of customers have. And then I also have some background in dealing with issues of identity between Kubernetes and external systems
C
that makes me think that I'm not sure this setup as suggested would be especially usable. When I see this, what I think of is: it sounds like what you're proposing is that you add, basically, something in admission that does a SubjectAccessReview check against the target clusters, using the identity that's creating the Federation resource in the host cluster. And that identity is not what Federation is going to use to actually create it.
C
I don't think there's a reason to assume they're going to be the same. So this might match a deployment plan that some folks have, but I don't think it would be universal in terms of how people expect RBAC to work. Does that make sense, that you would be performing this SubjectAccessReview check using a different identity from the thing that's actually going to go and create it?
C
I mean, you could implement this on your own and set up a validating webhook for Federation resources if you wanted to; there's nothing to stop you, there's no reason not to do that if this is something you want to build for yourself to use. But I don't know that this would be a match for how people actually expect RBAC to work in those cases, yeah.
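The objection can be made concrete with a toy model: the admission-time check would be evaluated against the request author's identity, while the actual write in the member cluster happens under Federation's service account, and those are two identities with no guaranteed relationship. The user names and the `can_create` helper below are hypothetical stand-ins for a real SubjectAccessReview.

```python
def can_create(member_rbac, user, namespace):
    """Toy stand-in for a SubjectAccessReview against one member cluster:
    can `user` create resources in `namespace`?"""
    return namespace in member_rbac.get(user, set())

member_rbac = {
    "alice": {"ns-1"},                  # the human authoring the federated resource
    "federation-sa": {"ns-1", "ns-2"},  # the service account the controllers run as
}

# The proposed admission webhook would check the author's identity...
print(can_create(member_rbac, "alice", "ns-2"))          # → False (denied)
# ...but the write in the member cluster happens as the service account.
print(can_create(member_rbac, "federation-sa", "ns-2"))  # → True
```

The divergence between the two answers is exactly the usability concern raised: the identity that passes or fails the check is not the identity that performs the write.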
F
RBAC for the Federation control plane makes sense. I mean, you can restrict access there: you can create Federation primitives in any given namespace, and then, if you join clusters globally, the access is on the Federation side, so the controllers would still have global access and could do whatever they needed to do in the member clusters. But if you can limit access to the primitives on the host cluster, then you effectively control access to the underlying clusters. Yeah.
B
Yeah, but I think that maybe we still need to have this admission controller to manage clusters in different namespaces, because the cluster registry already makes the clusters namespace-based. So I think we can leverage that to enable clusters to be created in different namespaces. So
C
If you want to, so, to be really clear: if you deploy Federation in a namespaced way, Federation will handle one namespace. So if you want to provide Federation-type services to some namespace foo, you run Federation in namespace foo, configured to look at namespace foo. If you want it in namespace bar, you run it there too. So we can run on a namespace-by-namespace level; you just run an additional controller for each namespace that you want to handle, and you can run it
C
in that namespace if you want, or you can run it outside that namespace and configure it to look at the namespace you want it to handle, and give it permission to do so. Right. I think eventually we could look at giving groups of namespaces to a single controller, but I don't think we need that complexity for now.
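The scoping rule being described, where a controller is either cluster-scoped or pinned to exactly one namespace, so each additional namespace means one more controller instance, can be sketched as follows (the data shapes are hypothetical illustrations, not the real controller config):

```python
def watched_by(controllers, namespace):
    """Which controllers handle resources in `namespace`? A controller whose
    scope is "cluster" watches everything; otherwise it watches one namespace."""
    return [
        c["name"]
        for c in controllers
        if c["scope"] == "cluster" or c["scope"] == namespace
    ]

controllers = [
    {"name": "federation-foo", "scope": "foo"},
    {"name": "federation-bar", "scope": "bar"},
]

print(watched_by(controllers, "foo"))  # → ['federation-foo']
print(watched_by(controllers, "baz"))  # → []  (unhandled until a controller is added)
```

The empty result for "baz" illustrates the operational cost of this model: covering a new namespace requires deploying another controller rather than reconfiguring an existing one.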
F
I mean, I'm assuming the idea is that, oh, this namespace is where this org, this department, is going to add their clusters, and that namespace is where that department is going to add their clusters. And I just don't see cluster addition being, in the near term, that big a deal;
F
doing it doesn't seem especially costly. And the challenge is that if you have different departments' clusters as part of one Federation, how do you control access to those clusters? Like, today, if a cluster is available to the Federation, anyone can send data to it, and I think what you're suggesting is that you want to be selective:
F
you know, these users can add these clusters, and then only those users can federate to those clusters. And I think that really fits better with a namespaced deployment model, where you just say: department X, you know, has namespace X; they're going to run their Federation and they're going to target, you know, that namespace and those member clusters. I guess, I don't know, maybe that's not what you're looking for.
F
If you want those users to have complete control, they might actually need separate control planes, because it's really hard to, sort of, like: Federation really wasn't designed for "oh yeah, there's going to be a differential level of access to the clusters available to the Federation."
F
You can control the resources that go to the clusters on a namespace basis, but the idea that department X would have these clusters and only they can federate to them, that's not really a model that would be easy to serve. I don't think it's just a matter of RBAC; it's a matter of the way the controllers are architected. Is this a use case that you see as important? Yeah, like:
F
What I'm suggesting is: if you want to have cluster-level Federation, and you want different departments federating to entirely disjoint sets of clusters, that really suggests you run a control plane per department rather than have one control plane that serves all departments, right? Because a host cluster doesn't have to be a super big thing. It could simply be, you know, an API server; it doesn't actually even have to run workloads if you don't want it to. Someone
F
In that case, I mean, you could run an API server on top of Kubernetes; that's how Federation v1 ran. So you could have, you know, "this is the control plane for department X, this is the control plane for department Y," each just running as, you know, controllers on a Kubernetes cluster, yeah.
F
I mean, if that works, that's great. That's kind of the use case the namespace work was trying to approach, allowing a level of granularity of Federation. So the PRs for that, I think they're technically complete; they just need review and to get merged. Do we need RBAC for it? Yeah, I think most of the work is done.
F
We don't have explicit, you know, RBAC configuration for this, but I think that's an easy enough step: just to say, if you give a user, for example, cluster admin, which I think is necessary, scoped just to a namespace, I think that's sufficient. And so, I mean, you could have differential permissions where some people could add clusters and other people couldn't, but that's kind of an implementation detail. It would really depend on the user and how they wanted to configure that.
F
You're raising the point of having not just documentation of how to use it, but maybe the why: you know, "you're going to use this here," or "if you're thinking about this kind of problem, here's a solution to your problem," versus "this is just how you deploy it in a namespaced fashion." So I would welcome your reviews on whatever doc updates I end up pushing, and contributions if you feel it's lacking; and maybe you can describe some of your use cases and point to the configuration involved.
A
One more thing, actually: can you just give a brief about your work-in-progress PRs? Until yesterday I saw a couple of PRs which didn't have much of a label, so I was not sure if I should go and check them. So if you could give a brief, it would be easier for me to look at them and understand.
F
So, the reason all of them were put in work-in-progress was because there were test failures coming out. They were related to the dependency flake that Shashi had reported, and that everybody was experiencing for quite a while, and there were all kinds of bugs in the testing that were only being revealed by deployment testing.
F
So, as of 216, which is supporting deploying the namespaced control plane and is ready for review: it actually runs two sets of tests. It deploys a cluster-scoped Federation, runs the tests, then deletes that; and then it deploys a namespaced Federation and runs the tests.
F
Yeah, so hopefully, if you review that PR, you'll get a sense of what's involved. Deploying namespaced is relatively trivial; you just pass flags to deploy Federation, and you have to make sure that you're not already running a cluster-scoped Federation, as they will, you know, understandably, step on one another.
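That guard, refusing a namespaced deploy while a cluster-scoped control plane is already present, since the two would step on one another, could be sketched as below. The deployment records and the helper name are hypothetical, not the actual deploy script's flags.

```python
def can_deploy_namespaced(existing_deployments):
    """A namespaced Federation is safe only if no cluster-scoped one is running."""
    return not any(d["scope"] == "cluster" for d in existing_deployments)

print(can_deploy_namespaced([{"name": "fed", "scope": "cluster"}]))  # → False
print(can_deploy_namespaced([{"name": "fed-foo", "scope": "foo"}]))  # → True
```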
F
And it also sort of impacts deployment, like the operator work that we're doing at Red Hat to be able to preview Federation to our customers. There's been discussion about, like, the need to have a canonical source for configuration, rather than just passing flags around or editing deployments, I think.
F
You still need cluster admin just because of the way the FederatedTypeConfig controller was architected, and so 230 rewrites the FederatedTypeConfig controller to not use shared informers, which you can't really restrict to a namespace, and then, when it deploys, it makes sure that everything is just in the namespace. So that represents the state that I would be comfortable giving to a user, saying: you could safely deploy this on a production instance; it's not going to destroy things.
A
So that's what I was trying to do; I did that last week. This week I didn't go through the PRs again, so I'll go through them. My only concern was the growing number: I just saw, I think today, there are some 17 open PRs, which is okay, but if there are some PRs which are ready and just waiting for reviewers, or blocked on somebody, and we had a single list, we could mention it over there. That's what my vision is.
F
I would also call out 260, which fixes a regression I created and am trying to fix. So thanks for pointing out that I actually didn't quite finish the job. Hopefully it's in good shape now; we need to merge that ASAP, because the tree is actually in an inconsistent state where no overrides are handled.