From YouTube: Kubernetes Federation WG sync 20180328
Description
See this page for more information: https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md
D: And I believe, like Maru, on the same note, that for complicated cases and the actual implementation, we can actually be updating the spec, aligning the document to the spec, during or after implementation, because we might find differences and challenges during implementation which cannot really be fleshed out at the beginning. Yeah.
D: We should come to some common ground in terms of the implementation. For example, Maru has taken three resources and implemented the simple workflows as described in the doc; there are different possible workflows which can be achieved in Federation using the Federation API. So Maru has implemented the manifest-based workflow.
D: He envisions that use case, and I actually have concrete use cases from the marketing team at Huawei; that's what they are looking at. Yeah, yeah.
Let me complete just two regular sentences. So that's what we are actually looking at building, so one thing which we need to arrive at, like ASAP, is the API spec. It's not that the spec is not important.
D: I understand that it is important, in the sense that the correct spec would mean the correct implementation of controllers, ease of use for the users, and all the relevant aspects of the API. But one of the more important points is arriving at, or reaching, a similar level of functionality to what Federation v1 actually has as of today, or maybe six or eight months ago, using the new API.
D: So, Federation v1 has some extra set of functionality, right: it has some controllers, and it has some feature set within those controllers. For example, and I gave this example, deployment controllers or replica set controllers. That controller has an inherent feature that, if a deployment is created in Federation, it will ensure that the replicas are placed at the correct clusters as needed. This happens without user involvement.
E: If that replica set had any dependency on any other resource, your application will just break, because there's no way to declare, "oh yeah, by the way, this replica set depends on this secret." And so if you move it to this cluster, you actually have to make sure that the secret has compatible distribution rules, but scheduling basically just doesn't take secrets into account; I guess it's resource-based. So, to me, that's a huge hole in the utility of Federation.
E: ...less about discussion and more about show and tell, giving us an opportunity to make sure we're providing regular updates and sharing knowledge on a regular basis, and, you know, cut it down to half an hour. It doesn't have to be laborious or anything, but move away from collaborating with the intent of converging, and just give us an opportunity to find common ground in the future.
D: I think it should be okay, and I think we also know who the engineers involved in this effort are. So if there is something which we basically need to communicate, we can reach out to individual engineers. Also, what I mean to say is, I don't see much which we would need to discuss with the wider working group.
E: I'm happy with that. I don't necessarily think we have to do it more frequently, because there has to be development to scale up; you know, we actually have to have stuff implemented that we can show. So I would say keep it at about every two weeks, and you can cut it back to half an hour. If we need more time, we can always adjust that, but I think just scaling it back, so we have less intrusion on everyone's time.
B: After the Monday meeting, we've been discussing on the Red Hat side what we think is the best thing to do with the federation-v2 project, and I think we're all on the same page internally at Red Hat, the folks you see in this meeting from Red Hat: we think it's very valuable to find the right low-level API constructs for higher-level things to target, and we would like to move the federation-v2 repo under the kubernetes-sigs org.
B: I think that's totally fine; I see no reason why they can't coexist in kubernetes-sigs. And we're very interested in continuing to add support for new core kubernetes types in federation-v2, and then also eventually adding support for CRD types, and we think it's very important to have primitives that other folks can target. So I will probably send out an email to SIG Multicluster today and propose that we move it to kubernetes-sigs. I wonder if any folks have input or thoughts on that.
B: I would say that, in the near term, I think it's kind of unclear exactly what shape an official Federation v2 would take. Since we now have this kubernetes-sigs org, we at Red Hat are most interested in getting it out of Maru's personal github account and into a place that indicates it's something that a SIG works on. I think if we wanted to rename it later, we totally could, but that's something that we don't have to decide now.
E: I mean, we're gonna keep building on the primitives that we've identified as being core to a solution to propagate and coordinate configuration for multiple clusters. In the absence of consensus on what those are, I think we don't really want to stop development, or adopt, you know, something that we don't feel is tested or proven. So, yeah.
C: Right, right, yeah.
D: It does. And see, honestly speaking, I don't really see that; personally speaking, I wouldn't really want to diverge. As was just mentioned, we are diverging, and I have seen the work Maru and everyone have put in, and I consider it quite useful, especially the testing part that you guys have updated; you have simplified the whole process really nicely.
D: So what I'm still looking forward to is somehow being able to do whatever we are doing at the same location, at least. Yeah, I have been saying the same thing, because we do get to review each other's work, and then the result comes out better; that's my understanding. And if we put two repos doing similar kinds of work, or similar stuff, as independent repos, both of them in the kubernetes-sigs org...
D: It doesn't matter; it can be any open source location. I mean, it ideally has to be an open source location which the community is interested in. If it is in collaboration with the community, that's great, but we basically want something done, or working, in, say, a three-month timeline or time frame. So if we can arrive at that, and I believe we can if we work together, I mean, this will be faster for us. Yeah.
B: I totally get that, and honestly I think it's best for us all to collaborate, and I would not want us to reinvent the wheel. I honestly think that we're extremely close, maybe modulo some vocabulary choices on how things should work. So I would love to have folks collaborate together on federation-v2, personally, and if we decide at some point that we're on the same page enough that there's some common ground that we can call Federation v2, even better.
D: I think in the previous meeting, or some time earlier, Maru and I were talking about how we can have different controllers under different hierarchies in the same repo. You could even have two different deployment controllers doing different things, right? One might be a simple controller which does a static reconcile kind of thing; another might be a controller which does some dynamic adjustments and all that, which is a higher-level abstraction kind of thing. So, does doing that kind of stuff seem okay, or...
E: I think that's fine. The only part that is maybe a little bit of a concern, in my mind, is that if we're going to be working in the same repo and kind of trying to converge on a solution, I do worry a little bit about having divergent APIs. But at the same time, hopefully that could just be something we hash out, you know, as part of development. I mean, you know.
E: None of the API constructs that we have are like that, I mean; well, hopefully not. Yeah, I mean, some of the discussions that we've had in these working group meetings have been a little bit contentious, so I'm just kind of voicing that I'm hoping we can sort of focus on the functional. Because I think we're all kind of on the same page when it comes to understanding what Federation is about, and how you have to sort of work within the constraints of kubernetes to achieve, you know, what you want to do. And given that common understanding, I think we'd come up with the same API; it's more like, if we're not considering the underpinnings that we're working to, then things can diverge. So I'm kind of hopeful that, you know, what you're proposing, to work in the same repo, will actually allow us to converge in a way that discussion didn't. Yeah.
E: I mean, to me this is kind of an implementation decision. I mean, there's been some question whether a really advanced implementation would want to have this intermediate representation, but I think in the short term, having a reliable propagation mechanism that feeds off these primitive types offers the opportunity to prototype advanced scheduling behavior without having to worry about all the logistics of propagation. Yeah, and at some point that would mean, you know, encompassing and sort of taking on the responsibility for propagation.
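The primitives being referred to here can be sketched as a minimal propagation step: a template describing the resource, plus a placement selecting clusters, yields one desired object per cluster. This is only an illustrative model; the field names (`clusterNames`, the template layout) are assumptions, not the actual federation-v2 API.

```python
# Minimal sketch of propagation driven by template + placement primitives.
# Field names here are illustrative, not the real federation-v2 types.

def propagate(template, placement):
    """Return a map of cluster name -> desired object for that cluster."""
    desired = {}
    for cluster in placement["clusterNames"]:
        # Every selected cluster receives a copy of the template's content.
        desired[cluster] = {
            "metadata": dict(template["metadata"]),
            "spec": dict(template["spec"]),
        }
    return desired

template = {"metadata": {"name": "my-app"}, "spec": {"replicas": 3}}
placement = {"clusterNames": ["us-east", "eu-west"]}
result = propagate(template, placement)
```

A scheduling prototype would then only need to rewrite the placement (and perhaps per-cluster overrides), leaving the propagation mechanics untouched.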
D: Some people have reached out to me individually, asking for specific stuff like StatefulSets or HPA, for example, recently; I'm not talking about all the types. And they have reached out to me because those documents were written by me. So it seems that there is some interest from users, and it seems that there are developers also who might be interested in getting bootstrapped into some kind of development effort, or in getting together.
D: Okay, so in the last meeting, okay, yesterday's meeting, Quinton did talk about the SIG Charter, and some template which needs to be aligned to that Charter template, and all this stuff. So he did pass those details on to me and did ask me to initiate that effort and update the Charter. But now, Paul, you mentioned that you would probably send out a mail today to SIG Multicluster requesting the movement to kubernetes-sigs, which will be interesting. Yeah.
B: ...like, you can put in some input on what I'm writing. So I'm just writing down a consensus that we have a broad enough overlap on the fundamentals and that it's good to develop going forward in the same repo; then also a consensus that we will likely work through implementation challenges as they come up, right; and then a shared perception, I think, that there are developers in the community who are interested, slash eager.
E: You know, I would be happy to have it be called Federation; I'm just worried about the potential for confusion if kubernetes Federation continues to exist in its current location. I would maybe say it like this: I would be happy to have kubernetes-sigs Federation so long as kubernetes/federation is moved to a location, and maybe renamed, to indicate its v1 or prototype status, so there's no room for confusion. How...
B: The top of the readme in this repo will explain, you know, the origin story of Federation v2, say that it's a work in progress, and disambiguate it from kubernetes/federation; and then we should add a note in kubernetes/federation that's like the other side of the same note, pointing people to kubernetes-sigs/federation-v2. How does that sound? Yep.
D: I created a repo named federation-v2 under my user, and I did some of the same exercise for the deployments API over there, and then I realized that we'd be doing so much replicated work in both places if we did that, so it's just better to do it together. Whatever controllers we write, if we don't have consensus on the controllers, we can probably name them something like deployments-advanced or deployments-basic, something like that. That's what my suggestion is. Okay.
E: I mean, it seems like a good idea at first, but there are two things that I think shift my attention away from trying to consume validation. Okay, one is that keeping it in sync is kind of a maintenance burden: like, oh, I have to vendor this; oh well, you know, I've vendored version 1.10, and now 1.11 and 1.12 are out; well, which version am I going to use? Probably I just have to keep to, you know, whatever the latest release is, so it's probably a minor issue. But the secondary issue is...
B: Maru and I did not previously discuss this, but that's basically where I was headed: as a short-term hack, you could probably build something that you could post a new resource to, and it would just give you a response that said whether it was valid or not. I think longer term it would be extremely useful to be able to have a real, live API server offer up a validation endpoint that you could just post a resource to, and it would apply the validations for that type and tell you.
E: ...with the caveat that was also raised, I believe on Monday, that, I mean, with pod presets... well, actually, I can't speak to what problems you're facing in Service Catalog, but, I mean, I create a template (the template as we're currently talking about it). The template is pretty complete, and we're not actually allowing override of very many fields. If we start allowing more overrides, like overrides of different fields, there's a possibility that we would actually have to accept the template, you know, run it through the validator, but then, when we accept overrides, we'd have to run the output of combining those through the validator as well. And even that doesn't guarantee us validity, because there could be quota issues; there could be, you know, specific validation on a given cluster for whatever resource, for whatever reason. So we're trying to do more work upfront, but we can never be totally sure that what we accept will be valid in a given cluster, and...
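The point above, that validation has to run on the result of combining the template with each cluster's overrides rather than on the template alone, can be sketched like this. The dotted override-path syntax and the toy validator are invented for illustration, not taken from the real API:

```python
import copy

def apply_overrides(template_spec, overrides):
    """Merge per-cluster override fields (dotted paths) into a copy of the spec."""
    merged = copy.deepcopy(template_spec)
    for path, value in overrides.items():
        node = merged
        keys = path.split(".")
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value
    return merged

def validate(spec):
    """Toy validator: replicas must be a non-negative integer."""
    replicas = spec.get("replicas")
    return isinstance(replicas, int) and replicas >= 0

template_spec = {"replicas": 3, "image": "nginx:1.13"}
# The template alone passes validation...
ok_template = validate(template_spec)
# ...but a per-cluster override can still yield an invalid merged spec,
# so validation must run on the merged output for each cluster.
merged = apply_overrides(template_spec, {"replicas": -1})
ok_merged = validate(merged)
```

And, as the speakers note, even a valid merged spec can still be rejected by a member cluster for quota or admission reasons; this check is upfront effort, not a guarantee.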
B: Yeah, I think there's no way to guarantee that we can equate acceptance from a Federation API standpoint with a guarantee that, eventually, every cluster that you put a resource into is going to accept it, based on validations or quota or any other type of initializer or admission controller that might be registered in that cluster, right. So it's probably something that we should design for: there has to be a really good way to surface errors in validation or otherwise.
D: Some other users probably also brought up some of these problems. So, oh, one of the comments that I read over there is that, if I understand what you are talking about, there is a separate server, or separate validator, running in parallel with, say, a kubernetes cluster. If we talk in terms of a kubernetes cluster, the validation will happen separately, and then, if the object is valid according to this validator, it will be submitted to the cluster, but there is no guarantee that it will work.
E: Yeah, I mean, though, the versioning support in CRDs was a sticking point for me there. So, I mean, to my mind it still comes down to best effort. In the case of kubernetes, you're accepting API requests, and to the best of its ability the API server will say, oh yeah, that's fine; and then, when it tries to run, you know, the pod that you submitted, it could fail for any number of reasons. Different clusters can have different behavior, and the representation that you submit to Federation is not necessarily the final representation you'll see in the clusters. So I'm just saying we do our best. I really do think this is just something where we'll develop strategies that will hopefully make the problem easier for the user to deal with, but really it's mitigation strategies; we're not going to come up with some deterministic way to solve it in all cases. It's more, how do we make this pain acceptable? Yeah, yeah.
B: Somebody suggested the idea (it might not have been on this issue; it might have been on another one) of having a dry-run parameter that you just add to an API create request, which would turn it into a request that just does the validation, and I think that would kind of fit the bill.
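Server-side dry run did later land in the Kubernetes API as a `dryRun` query parameter on write requests. As a rough sketch of what a client would send, assuming the standard Deployments endpoint path:

```python
from urllib.parse import urlencode

def create_url(api_server, namespace, dry_run=False):
    """Build the URL for creating a Deployment, optionally as a server-side dry run."""
    path = f"{api_server}/apis/apps/v1/namespaces/{namespace}/deployments"
    params = {"dryRun": "All"} if dry_run else {}
    return path + ("?" + urlencode(params) if params else "")

# With dry_run=True the API server runs admission and validation but
# persists nothing, which is exactly the "validation-only request" idea.
url = create_url("https://cluster.example.com", "default", dry_run=True)
```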
B: There are fairly low limits to what you can do with OpenAPI, and so for that reason you can also write a validating webhook that you apply to a CRD type, so you can do more complex validations. Having spent more hours than I can count in kubernetes validation code, I'm not sure that... well, I'm quite pessimistic about the ability to express validations for complex types like pods solely in OpenAPI. So it seems like a dry run would be very generically useful.
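A validating webhook of the kind described receives an AdmissionReview and returns an allow/deny verdict. A toy sketch of just the handler logic follows; the payload is heavily abbreviated from the real AdmissionReview, and the rule itself is made up:

```python
def validate_admission(review):
    """Toy validating-webhook handler: reject pods that define no containers.

    `review` mimics a heavily abbreviated AdmissionReview request; a real
    webhook would receive and return full AdmissionReview JSON over HTTPS.
    """
    obj = review["request"]["object"]
    containers = obj.get("spec", {}).get("containers", [])
    if not containers:
        return {"allowed": False, "status": {"message": "pod has no containers"}}
    return {"allowed": True}

good = validate_admission(
    {"request": {"object": {"spec": {"containers": [{"name": "app"}]}}}}
)
bad = validate_admission({"request": {"object": {"spec": {"containers": []}}}})
```

The draw of a webhook over OpenAPI schema validation is exactly this: arbitrary code can enforce cross-field rules that a schema cannot express.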
E: And, just checking that issue that Dario pointed us to, it looks like Quinton and Daniel had an exchange where Daniel suggests dry running those. Oh, it's going to be added soon; so, awesome.
E: And, I mean, the other thing: you could, you know, basically dry run every single time there's a change to a template and the overrides; you could dry run the heck out of that. Oh, look, you just changed something that can potentially break things, and you surface that super early, rather than at the time of propagation, where it's harder to give that feedback because it's asynchronous.
E: I think that's less likely. You'd essentially be dry running against the base version, and when there is skew with newer versions among the members of the Federation, it would effectively be a break, like a bug in backwards compatibility. So when that was discovered, it would be like, oh, we need to go fix that. Okay, yeah, okay.
D: How I see it is, if the problem is only for Federation, then just duplicating the same validation code inside the Federation API server is fine. But keeping it outside means the same validator, the same piece of code, can actually be used by different consumers: for example, Service Catalog, or CRDs, or admission. That's how I see the advantage of keeping it separate, right.
E: ...it would be possible to actually propagate a type that there's no code for; all you have to do is create the right CRD and register it in the right way, and bang, you basically have, de facto, the equivalent of an adapter in today's Federation v1, without writing any code. And that's only possible if we don't require the validation code to be in the tree.
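Propagating a type with no type-specific code can be sketched as a registry keyed by group/version/kind, with every object handled as an untyped dict, the way a CRD-driven controller would use the dynamic client. All names here are illustrative:

```python
# Sketch: register arbitrary types for propagation with no per-type code.
# Everything is handled as an untyped dict; names are made up for illustration.

registry = {}

def register(group, version, kind):
    """Enable propagation for a (group, version, kind) without custom code."""
    registry[(group, version, kind)] = True

def can_propagate(obj):
    g, _, v = obj["apiVersion"].partition("/")
    if not v:           # core-group objects have apiVersion like "v1"
        g, v = "", g
    return (g, v, obj["kind"]) in registry

register("example.com", "v1", "Widget")
widget = {"apiVersion": "example.com/v1", "kind": "Widget", "metadata": {"name": "w"}}
secret = {"apiVersion": "v1", "kind": "Secret", "metadata": {"name": "s"}}
```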
D: Sorry, I was actually muted. We have six more minutes; do we have something else specific to talk about? Oh, I remember one thing. I saw the code today, this afternoon; I was looking at the federation-v2 code, and I saw that all the API types have "federated" in the name. It's not just the templates: all the other types are also named as "Federated" something, like FederatedReplicaSet.
D: When did you start creating it all that way? I thought we had a consensus that, because the group is named "federation", the overall group-version-resource would be unique even if you define it as a CRD or something, so we would be naming them exactly what they are rather than prefixing them with "Federated". Has something changed, or is my understanding off?
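The group/version/resource uniqueness argument can be illustrated quickly: two resources may share a resource name as long as the full GVR tuple differs. The group and version strings below are invented for the example:

```python
# Two resources sharing a resource name remain distinct because the full
# group/version/resource (GVR) tuple differs. Group/version strings are invented.
core_rs = ("apps", "v1", "replicasets")
federated_rs = ("federation.example.com", "v1alpha1", "replicasets")

# An API index keyed by the full GVR has no collision...
api_index = {core_rs: "replica set", federated_rs: "federated replica set"}

# ...but the bare resource name alone is ambiguous, which is where the
# short-name clash concern comes from.
ambiguous = core_rs[2] == federated_rs[2]
```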
E: I've been waffling on the name change, because I agree with you that the "Federated" prefix is kind of redundant. The one caveat is that there needs to be differentiation, because this will be aggregated to a kubernetes API that exposes a replica set. So even though there's a group, sort of a namespace, we can't name the Federation resource "replica set"; we'd have to have something like "replica set template", which I am all for, with the caveat... my thinking is:
E: If we were to fix on using these primitives long term (template, placement, and override, or substitution, or variation, or whatever term, for that specific concept) and we're going to just have those three resources be separate indefinitely, then calling them "replica set template", "replica set placement", and so on makes sense to me. The other side of the equation, though, is if at some point we're going to fold in, say, any one of those things, so there's no longer a distinct concept of template, you know, variation, and placement; then, having something like a federated replica set... you need some name, and I guess if "template" could be an acceptable name even if we fold in the other concepts, that's fine. I just didn't want to preclude it: like, oh yeah, we're calling this the federated replica set, and at some point, you know, it's going to change and become more than just the template, and then the name "template" no longer fits. I'm confusing things; does that make sense?
B: So we ran into this in Service Catalog: there's a resource that's now called ServiceBinding, which was originally called Binding, and there is a core resource, also called Binding, that's used by the scheduler to assign a pod to a node. It's basically something that predates the existence of subresources, so it's like a tiny first-class resource, and when you have two resources with the same short name, you can't use either of them correctly via kubectl. Well...
E: I'd just like to continue this discussion at another time. I just don't think it really matters what we call it in the near term. I do think at some point we'll want to sort of make the UX a little bit simpler, and having "federated" everything everywhere makes that harder. My preference would be either a "federated" prefix for everything or a "federated" prefix for nothing, but that's just me. Let's talk offline, sure.