From YouTube: Kubernetes Federation WG sync 20180530
B: What we decided is that we will file whatever items we think should ideally be part of this alpha release as issues, and then we can reconcile them in the sync today. So I did file a set of issues. Not all of them necessarily need to be considered for alpha, but that's what our initial planning or intention was, so we can talk about them. We can talk about them in a bit — before that:
B: There were a couple of other pointers. Last week, or the week prior to that, I also mentioned review and merge timelines. Earlier we did actually segregate ownership of this work: Red Hat would focus on the low-level implementation — the three basic types and the push reconciler associated with them — and we would put our efforts into the higher-level implementations. So, honestly speaking, I found myself a little constrained, given there is no ownership for all of it.
D: I think one of my own requirements for alpha is to make sure things are separated, so I'd like to see separate binaries for the push reconciler, the DNS controller, and scheduling. The idea would be that if we had separate binaries — if we had them as distinct entities — then I would be far more comfortable not providing oversight. Like, if you need oversight, great; but if you just want to work on something and merge it, I think that's fine.
D: So long as there is a clear line of responsibility. I mean, we're currently maintaining the push reconciliation mechanism, and the goal would be, in the near term at least — while we're working on things that we have sort of a business interest in — that we're responsible for maintaining that. It's not that we don't coordinate and make sure of that.
B: Yeah, I think we might not have very stringent code review guidelines for the alpha release, at least. And I also did not actually look into many of the PRs that you guys worked on earlier — that's because we were trying to focus on what we were doing, so that we'd have something workable in place for alpha.
B
After
that,
we
can
probably
define
a
proper
guideline
for
cross
views
and
merging
of
peers,
but
until
then
I
guess
it
should
ideally
be
okay,
that
we
have
a
functional
code
and
we
are
able
to
have
something
working
out
there,
maybe
in
pieces,
not
necessarily
as
one
individual.
You
know
the
fixed
effects,
as
you
mentioned,
yeah.
D: I mean, if things are broken apart — for some reason I hadn't really realized this, but I think one of the barriers to experimentation is if everything has to be distributed as, you know, one monolithic binary or one image. But if we just consider it like: okay, we're going to have some common API constructs, and then, you know, the controllers...
D: Ultimately, maybe the controllers can even live in separate repos. It's not that we don't want to have a cohesive way of bringing things together and saying "this is Federation", but in terms of the development strategy, I think having interested parties able to work independently is a win. Not that we won't have things that we want to share responsibility for, but I don't know that it's everything. I think that reflects the spirit of the Kubernetes development model and how it's evolving anyway.
D: You know, I think we're more interested in the DNS thing. But with scheduling, I don't think we're really at a point where we have good use cases to be able to support providing oversight. And so in that case, so long as it's something separate and optional — like you can run it or not run it — I'd be happy to see you merging stuff when you think it's ready, when it meets your needs.
D: I mean, I think in the near term it's just a matter of ownership of whichever components. So, Red Hat — we kind of own the push reconciler part. We'd probably have shared responsibility for the APIs, and that's kind of a common point of interaction. I would expect that we'd probably share responsibility for — I mean, you're doing a lot of the work, but we definitely agree that is a deliverable.
A: And the question is: the key people from Red Hat and Huawei, and whoever else is actively working on this stuff — we need, you know, a few of those people. And so if we can just add Shashi and Irfan there — I mean, clearly they're not going to go and merge rubbish, and they're not going to, you know, block work that you guys are doing — but just from a mechanical point of view, we need to do that. And then I think later we can decide if we want to have a repo...
A: ...with just, you know, the API in it — or portions of the API — and then separate repos for each controller. I would personally prefer us not to take on that burden right now. There's quite a lot of work around getting all of those repos, you know, set up with CI testing, and it makes deployment slightly more difficult. But we can certainly do that down the road — I don't think it adds a huge amount of value right now.
A: Makes sense, cool. I had a quick question about the push reconciler, if we've finished with the previous topic. I've had some external companies coming to me — one in particular — who are interested in a sort of very basic, reusable push reconciler which doesn't do template substitution or anything like that. I understand that right now the template substitution and the propagation are kind of the same piece of code in the same binary.
A: So I think their requirement is that they have a controller which puts all of its logic in place and just wants to spit out fundamental Kubernetes resources to be reliably propagated to the underlying clusters. And I don't think they can do that today without generating templates and having the existing substitution logic — which may or may not be adequate for them — do the substitution and propagation together. So that's our understanding of their requirement, and yeah, it's not a big deal.
A: And so I think that's what they want — and I think they want that because they don't want to rely on whatever template substitution we do, for some reason, and I don't fully understand that. If it's the case that they can actually use the existing substitution, then there doesn't seem to be an urgent need for separating it out. If, for some reason, they want to generate their own things, then we would need to separate it out. Okay, I mean...
D: I think — hopefully — what you're talking about raises questions, and there are questions and answers for me, and I guess I would hope there would be more detail. I'm certainly amenable to breaking things apart so that, you know, reuse is possible. It's not clear to me that their use case is really in line with what the push reconciler is intended to do — not to judge the substitution, but the way that it functions is very specific, maybe too specific. But I need to know more about your use case.
A: So when you register a cluster, maybe — and this sounds more like Cluster API, and I'm not going to comment on whether that's a good idea — but you can imagine a situation where you have a high-level object, like a ReplicaSet or something, which needs replicas on each continent, say, and there are no clusters on a given continent, and it could build a cluster in order to deploy the ReplicaSet. That doesn't sound like a completely crazy idea — exactly how they would do it...
D: The idea that there's something as simple as "I just want to propagate to the clusters" — I guess, from the thinking that we've done around these primitives, you kind of want some — you can call it — we call it a template here. Previously, in Federation v1, it was just a standard Kubernetes resource, but unless they're wanting to go back to that model, you kind of need some sort of resource that identifies...
A: Not annotations, not simple Kubernetes resources, but things like distributed databases. So they would want a higher-level CRD which was, for example, a distributed CockroachDB database, and then they would want to decide where to put it based on a whole bunch of CockroachDB-specific logic — where to put all the data and the shards and whatever else — and then generate, you know, the underlying Kubernetes objects in the right clusters. But they don't have to do all the propagation and so on, I guess.
D: My hope would be that they'd be able to generate a template and placement for the creation of the resources. It's not clear that I really absorbed their entire use case, but in terms of distributing configuration to multiple clusters, that's kind of what the push reconciler — or a theoretical pull reconciler — does, as I understand it.
E: So what we're looking for, at a kind of high level, is the ability to deploy a custom resource definition into the control plane, which would then take responsibility for propagating things without necessarily using the templatized version — in collaboration with you. And it was also mentioned earlier about separating the cluster registry. Potentially, we would like to have functionality that doesn't necessarily templatize which clusters things should go into, but rather have failure domains for clusters, where we can identify clusters by provider and region, let's say, and just define kind of affinity or anti-affinity for those.
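The provider/region placement idea E describes can be sketched in a few lines. This is a minimal illustration, not Federation code; the cluster names, labels, and the `select_clusters` helper are all hypothetical:

```python
# Hypothetical sketch: pick target clusters by provider/region labels
# (failure domains) instead of listing cluster names in a template.

def select_clusters(clusters, affinity=None, anti_affinity=None):
    """Return names of clusters whose labels match every affinity term
    and match none of the anti-affinity terms."""
    affinity = affinity or {}
    anti_affinity = anti_affinity or {}
    selected = []
    for name, labels in clusters.items():
        if all(labels.get(k) == v for k, v in affinity.items()) and \
           not any(labels.get(k) == v for k, v in anti_affinity.items()):
            selected.append(name)
    return sorted(selected)

clusters = {
    "us-east-1": {"provider": "aws", "region": "us-east"},
    "eu-west-1": {"provider": "aws", "region": "eu-west"},
    "gke-eu":    {"provider": "gcp", "region": "eu-west"},
}

# All AWS clusters outside eu-west:
print(select_clusters(clusters,
                      affinity={"provider": "aws"},
                      anti_affinity={"region": "eu-west"}))  # ['us-east-1']
```

The point is that placement becomes a label query over registered clusters rather than an explicit, templatized cluster list.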
E: So what we're looking at is the possibility to propagate resources — which are custom resource definitions — to target clusters, but not necessarily leveraging the current simple template approach. A multi-cluster controller, for example, could take responsibility for creating or generating those target resources without necessarily using templates.
E: I think we were looking at something under the Federation umbrella, because we definitely want to leverage some of the Federation components, like the federated deployments and secrets. But if the other part goes into the realm of CRDs, then we're probably going to get more involved, hands-on, with propagation — maybe without necessarily using templates, yeah.
A: Maru, just out of interest, how do you propose or see the pull propagator working? I mean, my understanding would be that the pull propagator would be able to say "what should I have in my cluster?" and pull it into the cluster, and not be responsible for doing any of the template substitution, for example, and it's...
D: ...a reconciliation mechanism yet, or a YAML generator — but that's what I'm assuming will need to be done at some point for any users for whom a single point of failure is unacceptable. I don't know that there's a better solution than pushing configuration to a storage medium that can be replicated durably.
D: But I guess my suggestion is that it's much easier to synchronize data at that level versus coordinating APIs. At least — maybe I'm just misunderstanding — but I haven't seen a good way of ensuring uptime of a single Kubernetes cluster across regions, whereas distributing storage...
D: Not really — it's not really a generic controller configuration mechanism; it's very specifically tied to push reconciliation. So if you look at the API type definition, it's: what's my template, what's my placement, what's my override — if I have an override — with the target type. I'm not saying you couldn't create something generic, but I'm not sure of the value.
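The template/placement/override split D is describing can be sketched roughly as follows. This is an illustrative sketch only — the dictionary shapes and the flat per-cluster override map are simplifications, not the actual Federation v2 Go types:

```python
# Illustrative sketch of push reconciliation: for each cluster named in
# the placement, render the template with that cluster's overrides applied.
import copy

def render_per_cluster(template, placement, overrides):
    """Deep-copy the template for each placed cluster and apply that
    cluster's field overrides (a flat {field: value} map here)."""
    result = {}
    for cluster in placement["clusterNames"]:
        obj = copy.deepcopy(template)
        for field, value in overrides.get(cluster, {}).items():
            obj[field] = value
        result[cluster] = obj
    return result

template = {"kind": "Deployment", "replicas": 3, "image": "nginx:1.14"}
placement = {"clusterNames": ["cluster-a", "cluster-b"]}
overrides = {"cluster-b": {"replicas": 5}}

rendered = render_per_cluster(template, placement, overrides)
print(rendered["cluster-a"]["replicas"])  # 3
print(rendered["cluster-b"]["replicas"])  # 5
```

This is why the mechanism is "tied to push reconciliation": the three pieces only make sense together — the template is the desired object, the placement says where, and the overrides say how each copy differs.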
B: What I was considering is the user's perspective — how a user would consume this. So I actually created one issue; my intention in creating that issue was that a user who deploys a control plane might want only a set of binaries — sorry, a set of APIs — to be enabled, and some mechanism for the set of controllers related to those APIs to be enabled or disabled. That's what was in my mind.
D: ...suggesting that we don't help. I mean, I think this is where the coordination comes in — Federation v2 is a coordination between all these different components. Well-packaged things will provide user guides and installation help and that sort of thing. I would expect that we would make it easy to deploy these things in a standard way, but we wouldn't preclude doing the arbitrary things that Kubernetes allows — rather than having to, you know, define a configuration mechanism that says "I want to run this controller..."
D
I
want
all
right,
well,
run
it
with
these
configurations
and
have
that
all
in
one
big
binary,
every
single
controller
can
define
how
it
exposes
configuration
independently.
It
doesn't
have
to
be
considered
in
the
context
of
the
whole
thing.
I
just
think
about,
like,
like
kubernetes
controller
manager
is
huge
and
wanna
lithic
and
like
really
complicated.
B: It would be super easy if we just generate different repos, where tools like apiserver-builder or Kubebuilder provide all this mechanism — they auto-generate handlers for you, create the container for you, that kind of stuff. If we are managing everything in a single repo, it might be slightly more difficult. That's what I mean.
B: Yeah, I don't deny that it's actually quite useful, and maybe that's the path we ideally should take in the future. What I was driving at is that there would be some effort required. Currently, if you use either of these — even if you use apiserver-builder for generating code and the scaffolding — whatever way we are deploying, we were relying on that until now, right? And even if we move to Kubebuilder, in that case there is also some automated stuff.
B: There are, say, three or four automated steps, and they will just give you the API structure and probably a container for your controller. I don't know — maybe I have missed something — whether, if you have to generate, say, three containers for three different controllers in the same repo where you have initialized your domain, it is super easy, or easy at all, or whether we would have to override a lot of stuff. That's my concern — I don't deny that we should move there.
D: I think your concern is well-founded. I think there's a certain amount of risk — I don't think any of us are terribly familiar with Kubebuilder — so creating multiple binaries, to me, would be something to investigate, to see if it's a reasonable thing to do or if it's going to be too time-consuming. I mean, we're using apiserver-builder, which I think has been kind of useful.
D: We've got it bootstrapped and working without having to worry about all of the infrastructure that Kubernetes requires, for example. As soon as we move away from aggregation, its value becomes a lot less clear to me, because we're not, you know, building simple controllers — we can't just use the generators it gives us in those cases. I think the main utility, once we move to CRDs, would be client generation, and I'm not sure we have to tie ourselves to a framework for that.
D
We
might
be
able
to
crib
pieces
of
it
and
reuse
the
build
infrastructure
without
having
to
tie
ourselves
to
I
put
ourselves
within
the
box
that
it
puts
us
in
crafting.
But
anyway,
sorry
to
your
point,
though,
that
that's
something
we
need
to
be
rescues,
see
if
it's
possible
to
what
it
would
take
to
generate
multiple
binary
and
is
there
we're
there
other
concerns?
You
have
it
to
me.
It's
like,
what's
whispies
outlook,
you
know,
make
them
issues
and
make
sure
that
we
cover
them.
Yeah.
B: Last time I did raise the concern, but, I mean, everybody seemed aligned with it, except Dario raised some concerns, and I'm not sure what his exact use case is that might necessarily need CRD-based infrastructure. I mean, actually, both of these are sort of differences in infrastructure. I spent some time generating a similar API type using both Kubebuilder and apiserver-builder earlier.
B: We already know that the generation and the steps to use it are not greatly different, and in fact the APIs that would be exposed after that also remain the same; the controller logic also remains the same. The difference is only in how the APIs are registered with the API server. In the apiserver-builder case, a separate API server is run and registered via the aggregator; in the CRD case, the types are directly registered with the API server using the CRD extension mechanism, yeah.
C: My point is that, using CRDs, you'll be able to take a normal Kubernetes cluster and just promote it to a federated control plane — it's going to be pretty straightforward. And probably the other difference is the definition of alpha: if we call it alpha, then people will expect a beta, and in my mind the technology is pretty different.
B: That's accepted, but I think we had this conversation a couple of months ago, I guess. There is a clear guideline — in the docs, in the developer docs — about when you might want to use CRDs and when you might want to use aggregation. And one of the pressing reasons for starting with aggregation was that CRDs at that point in time were not that mature, and I think one of the limitations was how versioning of APIs is handled in CRDs.
B: I still think that aggregation sort of suits our use case more, because we basically need a clear path of evolution of APIs from alpha to beta or whatever, and I don't really know exactly how that's handled in CRDs. But if that's workable — and there are some other reasons for using CRDs — I would certainly be open to that.
D: Nobody in their right mind is going to be using this in production at an alpha level, and it's questionable in my mind that we're going to have any backwards-incompatible changes between beta and v1. So, to me: I would expect that we are going to get versioning in CRDs eventually, because for broader user extension I think it's required, but for our use cases I don't think it's necessary. So putting it up as a blocker — like "oh, we can't use CRDs because we need this feature"...
D
It
doesn't
preclude
getting
versioning
I'm,
not
trying
to
say
Oh,
we'll,
never
need
versioning,
but,
to
my
mind
like
up
to
v1,
it's
questionable
to
me
whether
they
were
actually
willing
to
versioning
before
we
go.
V1
and
I
would
I
would
think
that
we
would
want
versioning
more.
We
did
it.
Something
was
radically
different
to
be
to.
B: I would also think that, at any given point in time, the latest is what you consider the state of the code — it may be alpha or beta or whatever, and there is no parallel API alongside it. I agree with that. The situation as of now, I think, is not so complex that you need to keep older versions as well. But the version is used in the API path itself, and that makes me question this point.
B: For example, even though we have initialized that API right now, we always have v1alpha1 or something in the API path, right? Even in the case of CRDs, we specify that. So that is where I find it reasonable to question this particular aspect. If the version weren't in the path at all — for example, if we always just said federation.k8s.io/<resource> directly — it could be "federated APIs" with no version at all in the path, right?
B
It
sounds
funny
to
me
if
we
I
mean
if
we
are
not
going
to
use
version
at
all,
even
in
the
case
of
CID
like
I,
don't
understand
that
concept
at
all.
Right
now,
when
you
initiate
API
in
CID,
you
provide
all
the
GVK
that
group
version
and
the
resource
name.
So
version
is
also
there,
but
III
don't
really
understand.
Why
does
that
documentation
say
that
versioning
beyond
that
is
not
possible.
So,
for
example,
if
we
say
do
an
alpha.
D
Like
when
you
define
us
here,
the
like
nature's
here
DS
is
it's
an
API
resource.
An
API
resource
is
just
keyed
by
name
like
the
way
I.
Have
it
set
it
up.
Another
thing
they
couldn't
do
it
differently,
but
today,
if
I
created
an
API
or
sorry
I
see
Rd
definition
or
C
or
D,
it
is
there's
a
name
and
it's
cluster
wide,
and
so
in
order
to
support
versioning,
you
would
have
to
embed
the
version
in
the
name
and
they
they
haven't
done
that.
So,
if
you
look
at
the
validation,
it's.
D: ...it must be my group name and my resource name — not the version. I don't know why they did that; there's nothing preventing them from putting a version in. And if you do try to do it today, it just wouldn't validate — it makes sure that the group name and resource name you specify, you know, form the name of the resource.
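The CRD naming rule D describes can be made concrete with a couple of lines. This is a sketch of the constraint, not apiserver code; the `federation.k8s.io` group and the resource name are examples chosen for illustration:

```python
# Sketch of the CRD naming rule: a CRD's metadata.name must be
# "<plural>.<group>", with no version component, so a versioned
# name like "<plural>.<version>.<group>" fails validation.

def crd_name(plural, group):
    """The only name the apiserver accepts for a CRD of this type."""
    return f"{plural}.{group}"

def validate_crd_name(name, plural, group):
    """Mimics the check that the name is formed from the spec's
    plural resource name and group -- nothing else allowed."""
    return name == crd_name(plural, group)

print(validate_crd_name("federateddeployments.federation.k8s.io",
                        "federateddeployments", "federation.k8s.io"))  # True
print(validate_crd_name("federateddeployments.v1alpha1.federation.k8s.io",
                        "federateddeployments", "federation.k8s.io"))  # False
```

Because the name is cluster-wide and keyed only by plural and group, two versions of the same type cannot coexist as separate CRD objects — which is the versioning limitation being discussed.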
B: Yep — honestly speaking, I'm fine with going either way. If you continue and release this alpha as-is, as I mentioned in the reply to that email, I am fine; and if we can realistically migrate to CRDs, knowing this limitation, that's fine too. I mean, I know that there is no pressing reason to have truly parallel APIs — like one API which is something like federation/v1 and then an older API, the existing federation, at the same time. I don't see any of that as a...
E: I tried recently, and so far it appears to be limited to just schema validation. So if you want to have more involved validation, it will be challenging — specifically also for resources which have a pretty involved schema; the CRD definition can get there, but not very quickly, yeah.
E
Right
yeah,
another
aspect
is
defaulting
because,
if
effectively,
what
they
suggest
is
that
you
can
do
more
extensive
validation
on
the
controller
side,
which
I
find
it
a
little
bit
awkward,
because
you
basically
already
accepted
the
object
in
API
and
then
controller
has
to
reject
it
or
fail
it.
You
just
kind
of
I,
don't
find
it
comfortable
that
mean
yeah.
D: So, I mean, there's nothing preventing anyone from creating APIs in whatever way they want; I just think, for Federation, the limitations with defaulting and validation are not really blockers. The main problem for Federation is around templates, because they embed Kubernetes types, which are generally very complicated and would require complicated validation. Today, using apiserver-builder, all we're getting is basically a type check.
D
So
if
the
field
is
supposed
to
have
like
one
of
three
constant
string
values
currently
that
will
be
checked
but
in
terms
of
defaulting
in
terms
of
doing
algorithmic
validation,
the
dharia
mentions
there
is
no
provision
for
that.
So
by
moving
to
CR
DS
to
me,
we
would
either
want
to
make
sure
that
we
can
ensure
the
type
safety
we
have
today.
If
that's
important
I
mean
we
could
always
just
punt
on
it.
D
For
now
the
long
run
answer
is
we're
probably
not
going
to
do
default
thing
and
the
way
that
the
propagator,
the
push
reconciler
works
today
way.
Compares
versions
allows
for
defaulting
not
being
as
long
as
you
know
when
it
does
a
push
to
or
like
it
when
it
reconciles.
With
a
member
cluster
it'll
record
the
versions
of
that
reconciliation
effort,
it
was
successful
as
long
as
it
doesn't
change.
No
it
doesn't.
It
doesn't
have
to
do
anything
and
so
that
kind
of
covers
default.
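The version-comparison behavior D describes can be sketched as follows. This is a simplified illustration — the class name, the version strings, and the two-tuple bookkeeping are all hypothetical, not the actual reconciler implementation:

```python
# Sketch: record the observed versions after a successful push, and skip
# work while neither the template nor the propagated copy has changed.
# Cluster-side defaulting then doesn't trigger endless re-pushes, because
# the recorded cluster version already reflects the defaulted object.

class PushReconciler:
    def __init__(self):
        # (cluster, resource) -> (template_version, cluster_version)
        self.recorded = {}

    def reconcile(self, cluster, resource, template_version, cluster_version):
        key = (cluster, resource)
        if self.recorded.get(key) == (template_version, cluster_version):
            return "skip"  # nothing changed since the last successful push
        # ... push the rendered object to the member cluster here ...
        self.recorded[key] = (template_version, cluster_version)
        return "pushed"

r = PushReconciler()
print(r.reconcile("c1", "deploy/web", "tv1", "cv7"))  # pushed
print(r.reconcile("c1", "deploy/web", "tv1", "cv7"))  # skip
print(r.reconcile("c1", "deploy/web", "tv2", "cv7"))  # pushed
```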
D: In terms of more complicated validation — like algorithmic validation — I think our only real answer is that we need to wait for that support. So that's the capability that was listed on the issue that you pointed at, Dario, involving, like, how do you have a core type in a CRD. I don't think we have any better answer: if somebody requires complex validation, as you say, you can do it — you're talking about an admission controller, I think; in the CRD space they just call it a webhook.
D
It's
kind
of
the
same
thing,
I'm,
just
making
sure
I
got
terminology
right
so
so,
to
my
mind,
like
moving
forward
with
CR,
these
would
require
the
whether
we
we
care
about
the
type
safety
or
whether
you
know
it's,
nothing.
The
errors
wouldn't
be
caught.
It's
just.
We
want
to
catch
them,
spend
the
effort
to
catch
them
in
Federation
API,
or
do
we
want
to
just
leave
them
to
be
caught
when
they
try
to
be
propagated?
The
underlying
clusters.
B: For example, I have used some of this in scheduling. Say there is some field — for example, the minimum replicas, which is the target for the distribution — which I have defaulted to something like one, and the validation should ideally be that it cannot be anything less than that, as an example. I have a case for that, so I imagined that...
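B's example is exactly the kind of check a schema type/enum validation can't express but a webhook-style algorithmic validator can. A minimal sketch, assuming a hypothetical `minReplicas` field and default of one (not the actual scheduling API):

```python
# Sketch of webhook-style defaulting + validation for a CRD field:
# apply the default, then reject values below it. This is the kind of
# logic that would live in a mutating/validating admission webhook.

DEFAULT_MIN_REPLICAS = 1

def default_and_validate(spec):
    """Default minReplicas to 1, then reject anything smaller."""
    spec = dict(spec)  # don't mutate the caller's object
    spec.setdefault("minReplicas", DEFAULT_MIN_REPLICAS)
    if spec["minReplicas"] < DEFAULT_MIN_REPLICAS:
        raise ValueError("minReplicas must be >= %d" % DEFAULT_MIN_REPLICAS)
    return spec

print(default_and_validate({}))                  # {'minReplicas': 1}
print(default_and_validate({"minReplicas": 4}))  # {'minReplicas': 4}
try:
    default_and_validate({"minReplicas": 0})
except ValueError as e:
    print(e)                                     # minReplicas must be >= 1
```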
D: I don't think webhooks are a particularly hard thing, but they do involve configuring TLS and deploying, you know, something that can be called. What I'd like is to be able to quantify the cost of that and weigh it against the operational cost of running an aggregated API server instead.
B
Okay,
so
yeah,
if
there
are
alternatives
for
these
kind
of
things,
C
versioning
I,
don't
consider
a
blocker
at
all,
because,
as
you
mentioned,
that
at
any
given
point
of
time,
Federation
could
be
considered
in
a
particular
stage
either.
It
is
like
alpha
beta
and
that's
the
latest.
That's
what
is
a
bit
but
defaulting
and
validation
to
me
is
sort
of
a
drawback.
B
It
might
not
be
a
blocker,
there
might
be,
we
might
be
able
to
figure
out
some
workaround
or
no
workaround
would
mean
that
this
tech
would
have
to
be
in
controller
and
then,
after
the
epa
is
created,
we
have
to
update
state
and
stephon's
I,
don't
know
how
that
would
be
handled,
but
it
obviously
is
a
drawback.
So
we
might
need
to
investigate
this
ya.
B: There are only two main headings under which CRDs versus aggregation can be decided: defaulting — I mean, we can highlight that as validation and defaulting — and versioning. Versioning we don't necessarily consider an advantage or a disadvantage; we need to evaluate validation in that context, yeah.
D
I
would
say
that
the
when
I
consider
a
world
where,
like
other
other
parties,
can
create
controllers
and
create
api
types,
I
would
expect
that
they
would
probably
not
want
to
use
aggregation
as
they
could
avoid
it.
So
I'm
not
I'm
not
just
like.
We
need
to
investigate
I'm
not
suggesting
otherwise,
but
I'm.
Thinking
like
oh
I,
want
to
create
some.
You
know
new
capability
and
I
have
the
models
for
doing
it
with
charities,
which
makes
my
life
easy.
Like
I
have
examples.
The
webhook
I
think
the
challenge
today
is
we
don't
really
have.
D: Yes — on the other hand, if we had, you know, examples in code — "here's how you do it with a CRD, here's how you write your webhook to provide validation and/or defaulting" — then I think it becomes a lot simpler. Not really today, though. The reason validation is easy with aggregation is because we have lots of examples of it in the main Kubernetes codebase, I think.