From YouTube: Kubernetes Federation WG sync 20180411
B: Yeah, I did not get time to think about what we should be discussing today, so I don't have a set agenda or a specific point for today's discussion. In the last working group sync, I guess we had some amount of discussion on what we want to present or be talking about at KubeCon, which I think is getting resolved as part of the slide deck Christian presented, the talk Christian has initiated. So apart from that, is there anything else that we have on the line to talk about?
B: So actually, we could have done that; the time you took on that exercise paid off this time. I have one point, though. As part of the PR, that discussion has happened, but I see that Shashi also faced a similar issue: it is about the grouping of the different APIs, in v2, in the current repo. I somehow feel that it might be a good time that we move ahead with the grouping as we discussed, and initiate the project, or reinitiate the project, with federation.k8s.io as the domain, and have better naming of the groups that we are defining. Like, I initiated a group and I named it federatedscheduling, and Shashi is now initiating one for status, and he will probably have to name it something like federatedstatus, or something like that, so as not to interfere with the ones that are already there.
D: I mean, if we were to continue to define API types via code generation and the API machinery, it makes sense to reset it as you say. I'm kind of leaning towards just thinking of a CRD future, in which case the group names are a little bit less important; you still need them, but you don't have this dependency on the project having a certain domain, so I think you could be a lot more flexible. But what do people think about the idea of kind of targeting a near-term switch to CRDs?
D: It's true, but I mean, that's been something that we've mentioned in the past as kind of a blocker for adoption of CRDs. Having dug a little bit deeper, I have a little less concern, the reason being that supporting multiple versions is kind of a nightmare anyway. So realistically you only handle one version at a time, and you can kind of provide backwards compatibility.
D: But I guess, for me, if we need that backwards compatibility we can always get it in the future. If we were to avoid defining APIs via the standard machinery and code generation, and just use CRDs, that means we could potentially avoid having to impose the operational requirement of an API server. We wouldn't have to run a separate etcd and server; you would just share the host cluster's etcd instance. And if, in the future, before we go GA, we realize...
D: ...oh, we actually do want the capabilities provided by the API server, we can always switch, right? I mean, to my mind, GA is kind of where that decision would be made. The plan would be: do we understand this well enough as CRDs? And, probably, CRDs could have versioning support in the next six months to a year. So to me there's a certain amount of risk in just going the CRD path, because we might have to switch.
C: It was supposed to be published last week, but we kind of missed it. Is it worth talking about the level of detail? What I was thinking might be useful would be to identify the primary API types that we are interested in implementing in the next 12 months, to the best of our understanding of what that looks like today, and figure out who is interested in implementing which specific types in the next 12 months, with rough guidelines.
D: I think maybe reframing would be helpful, for me anyway, versus the old strategy of saying, like, oh, we're going beta with deployments, for example. I've been more focused on building infrastructure, I'd say, for propagating everything, and so initially we'll be able to propagate all these resources, and...
D: ...when we move to CRDs, this means that the fact that we don't support something out of the box is not necessarily a blocker for adoption. Because when we talk about moving to CRDs, that means template, placement, override, substitution, whatever; they can be defined by CRDs, and as long as the convention is maintained, you'll be able to propagate those. More advanced behavior, like scheduling...
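To make the template/placement/override convention being described concrete, here is a minimal sketch of the three objects as Go API types for a federated Deployment. The type and field names below are illustrative assumptions, not an API settled in this meeting:

```go
// Sketch of the three-part convention for a federated Deployment.
// Names and fields are illustrative assumptions, not a settled API.
package federation

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FederatedDeployment carries the template: the Deployment to create
// in each member cluster.
type FederatedDeployment struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              FederatedDeploymentSpec `json:"spec"`
}

type FederatedDeploymentSpec struct {
	Template appsv1.Deployment `json:"template"`
}

// FederatedDeploymentPlacement names the member clusters the
// template should be propagated to.
type FederatedDeploymentPlacement struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              PlacementSpec `json:"spec"`
}

type PlacementSpec struct {
	ClusterNames []string `json:"clusterNames"`
}

// FederatedDeploymentOverride carries per-cluster deviations from the
// template, for example a different replica count in one cluster.
type FederatedDeploymentOverride struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              OverrideSpec `json:"spec"`
}

type OverrideSpec struct {
	Overrides []ClusterOverride `json:"overrides"`
}

type ClusterOverride struct {
	ClusterName string `json:"clusterName"`
	Replicas    *int32 `json:"replicas,omitempty"`
}
```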
D: Assuming that's what we mean by deployments, scheduled deployments. Yeah, it's more like, you know, I want support for propagating all these things, but then I want dynamic scheduling. So to me, 'do I support deployments' isn't really the question. It's: do I support propagation of deployments, which is level one, and do I support scheduling of deployments across clusters, which is like level two, sure, and...
C: Whatever those macro pieces of functionality are. And you're right, I mean, deployment might not be the right granularity; there might be simple deployments and scheduled deployments, or something, whatever we call them. Wherever we go, I still think we need to say, well, we plan to do the simple ones in alpha by such-and-such a date. I wasn't trying to disagree with that.
C: I was just... yeah, and maybe they actually come out at the same time; maybe different groups are working on the simple one and the complex one. But I think we can still publish dates, which may even coincide. And yeah, this assumes we still have this kind of high-level concept of the two primary use cases, the one being the cluster admin propagating...
C: Absolutely, and to guide ourselves. You know, once we've collectively agreed that this is what we're planning to do, then we can measure ourselves against that every quarter, or whatever, and say: oh, we're on track, or we're changing our minds, or whatever the case may be. I'm just looking for Christian's slide deck, and...
D: Yeah, because I was kind of realizing, and I think it was kind of a gap in my knowledge or experience, that scheduling of deployments or replica sets always happened in isolation. In a realistic scenario, maybe that works for jobs or other things that are kind of batch oriented and don't necessarily have dependencies. But...
B: No, they were called replica placement preferences, or replica location preferences, something like that, in v1: the annotations we used to use on replica sets and deployments, which define the weight or the min/max for each cluster. That is what I would be targeting: that kind of preference can be applied on a deployment or a replica set, and in future, in place of a targetRef, it can probably have a label-selector kind of thing, so that it can be applied to multiple deployments or multiple replica sets together.
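As a rough illustration of the preference type being described, modeled on the v1 annotations: the struct below is a sketch, and its names and fields are assumptions rather than an agreed API.

```go
package federation

// ReplicaSchedulingPreferenceSpec sketches per-cluster scheduling
// preferences for a deployment or replica set. Illustrative only.
type ReplicaSchedulingPreferenceSpec struct {
	// TargetRef names the deployment or replica set to schedule.
	// A future version might use a label selector here instead,
	// so one preference can cover several workloads.
	TargetRef string `json:"targetRef"`

	// TotalReplicas is the desired total across all clusters.
	TotalReplicas int32 `json:"totalReplicas"`

	// Clusters maps a cluster name to its preferences.
	Clusters map[string]ClusterPreferences `json:"clusters,omitempty"`
}

// ClusterPreferences captures the weight and min/max bounds that the
// v1 annotations expressed for each cluster.
type ClusterPreferences struct {
	MinReplicas int64  `json:"minReplicas,omitempty"`
	MaxReplicas *int64 `json:"maxReplicas,omitempty"`
	Weight      int64  `json:"weight"`
}
```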
B: The second portion, which I think is a different problem, or a parallel problem which need not be solved at the same time, is placement of multiple resources, or applying constraints on different types together: something like a policy enforcer, or an admin who wants to do that. I think I would target implementing that in a different API type and a different controller, which can place multiple resources of different types, using a label selector or something like that, at the desired location: say, in one cluster, or not in one cluster.
D: Just to try to play it back to you: the goal here is to get a plan for six months, a year, or whatever, I guess. In my mind, I was sort of questioning the utility of being able to schedule replica sets or deployments in isolation, and then talking about, okay, well, you could schedule groups of things, and then maybe their related resources. It seems like a hard problem. I guess I have a hard time conceptualizing what would actually validate it, because I find you can schedule things like...
B: Okay, so when quotas and resource constraints etc. also come into the picture, then the problem, if you call it a problem, is magnified; or, if you call it a level of abstraction, we probably go one more level of abstraction up, saying that now you have to implement the scheduling paradigm together with multiple other constraints in place.
B: So, the first milestone that I'm targeting... sorry, yeah, I was actually taking notes a moment ago. The first milestone that I am targeting is to have exactly the same functionality that we have in v1, which is a very short-term milestone; I should be able to do or finish it in maybe a week's time, or a couple of weeks' time, if I don't face any problems.
B: ...anything like that. But for the second one that I mentioned, which lies in the domain of policy, probably placing multiple resources of different types together, or applying constraints, that kind of stuff, a lot of thought will need to go into defining the API itself; that might be a month or so after I finish the first. And then, if we are talking about the third problem you are describing, where resources and quotas and all those things come in, I can't really estimate that as of now; it will be later.
C: Just one comment which may be useful: I agree with Maru that it is a difficult problem in its entirety, but I think we can perhaps break it down into pieces that are much easier to think about. One piece is: how does an application administrator specify that they want a group of things to be in the same clusters?
C: So, in the example you gave: I want my deployment to go into some subset of clusters, and I want these secrets and these config maps, the secrets in this context, to go into the same clusters. I think just specifying that is one problem, and I think that's fairly tractable. And then the question is: how do we actually enforce that? How do we...
C: ...how do we schedule that stuff in such a way that it all successfully lands in a set of clusters that adheres to the set of constraints that the application admin might have specified? And I think to make that work 100% in all cases, guaranteed, is impossible, because you can never know that there will be...
C: ...you know, a set of clusters which fulfills the constraints that the application administrator specified, and which all have the right amount of quota of the right types in all the right places. But I think, in the common case, some of those types will probably be more difficult to schedule than others. So, for example, it would be more difficult to get a replica set to deploy correctly into a given cluster than perhaps to add a secret or a config map to that cluster, and so that might be a way of tackling it: to say...
C: Yeah, I think that makes sense. And I think, with the new, decoupled way of doing these things, where the user's preferences for where they want the replica set, for example, are decoupled from the output that a scheduler or a human might produce to say 'put this thing in this cluster, this cluster, and this cluster', that may arguably make the problem easier to think about. Because, as a first pass, we might actually only support the second half of that process, and then decide how to step back one level and say, right...
C: That's great, yeah. And I guess, if services and load balancers or ingress are important, they need to have pods underneath them somehow, so I guess deployment would be the natural thing to go along with those. So, collectively, do we have any sort of rough... is this the stuff that you expect to have out by the middle of the year? Yeah.
D: Well, I should say that all the propagation and the CRD stuff, that's the near term, and maybe integration with external-dns, which I think Shashi has already sort of started on. Oh yeah, and as for kubemci, it sounds like Google is off doing stuff and hasn't really been communicating about it, or we haven't heard about it. But my guess is that, whatever they're doing, moving kubemci from a CLI tool to something that's controller based...
D: ...Federation will be able to hook into it in some way. So it's not like we want to do most of the work; it's more like there's something that can be used separately. And similarly, on external DNS: Paul has a proposal that he's been trying to, well, he's kind of really tied up this week, but the goal is kind of doing external DNS for those federated services a little bit differently, such that the integration with the Azure or GCP or AWS DNS solution will be controller based and won't necessarily have to live in tree.
C: That makes sense; I think that's a useful approach. I don't think we're that far away from it, actually. I mean, there are two parts to it. One is how you express the stuff that you want to be in the DNS, and currently that is done by the DNS provider API: you just talk to the API and you say, I want...
C: ...you know, these records in DNS, and it doesn't matter which flavor of DNS you're talking to, the API looks identical. And then there's the actual act of propagating that stuff. Those two are currently bundled together: you tell the API what records you want, independently of which DNS provider you're using, and then that library actually propagates the records. So I guess the proposal is to pull those two apart. I can see the appeal in it.
C: It's all part of the same API, but it would be totally easy to write a DNS provider which does nothing other than consume that, that is, a server for that API which stuffs the records into etcd, basically, and then have another piece, the controller, that reads the records out of etcd and pushes them into the cloud providers, which is what the current implementations of the DNS providers do. I think we've got four or five of them now, but...
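The decoupling being proposed can be pictured as splitting today's bundled DNS provider library into two halves: one records the desired DNS state, the other programs a concrete provider. A hedged Go sketch, with interface and type names invented for illustration:

```go
package dns

import "context"

// RecordSet is a provider-independent description of desired DNS
// state, for example for a federated service.
type RecordSet struct {
	Name    string   // fully qualified DNS name
	Type    string   // "A", "CNAME", ...
	TTL     int64
	Rrdatas []string // record values
}

// DesiredStateWriter is the first half: callers declare the records
// they want, independently of any DNS provider. An implementation
// could simply persist the records, e.g. in etcd behind a CRD.
type DesiredStateWriter interface {
	Ensure(ctx context.Context, rs RecordSet) error
	Delete(ctx context.Context, name, recordType string) error
}

// ProviderSyncer is the second half: a controller reads the desired
// records back out and programs a concrete provider (Google Cloud
// DNS, Route 53, Azure DNS, and so on) out of tree.
type ProviderSyncer interface {
	Sync(ctx context.Context, desired []RecordSet) error
}
```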
E: And I'm not quite sure, because that also is supposed to be broken down, with the DNS provider as a separate piece and the controller as a separate one, but right now it is all coupled, I believe. So I'm not quite sure how we can convince them to make it separate, so that we can reuse that particular code.
C: So, one brief comment about this potential change of direction: the one benefit of what we have right now is that deployment is pretty straightforward. Currently there's a DNS library, a DNS controller library I guess you would call it, that is part of the federated services controller, and that in turn links in all the cloud providers, I'm sorry, DNS providers. And so when you deploy this thing, all you do is deploy a generic service...
C: ...it's the federated service controller, and you just give it the config setting that says which DNS provider to use, and that's it. If we're going to decouple this stuff into separate controllers and have them in different repos and all that kind of stuff, we just need to make sure that someone wanting to deploy this thing has a fairly easy path to saying, you know, 'I want to use DNS provider X', and doing it, and not have them jump through too many hoops to make that happen. Agreed?
D: I mean, my expectation would be that, if I had an external DNS, I would configure my federation to write the resources that these things would consume. I would have a deployment for both the federation controllers and the extra controllers, right? And I'm not a Kube client expert, but I would have a series of configurations anyway; basically you just say, run this controller, and here's the configuration. So you can, no doubt, yeah.
C: You know, four or five, whatever it is, DNS providers, and they're linked into the v1 federated service controller. If you want to use one of those, it's trivial, and there's also this extension point where you can use any of the external ones provided by whoever, which may or may not be stable, and if you want to implement more DNS providers, that's the way to do it. We've got four or five core ones, and outside of that there are extension points to hook in whichever ones you want, I guess.
D: I think my preference would be for having everybody on a level playing field. I think there are consequences to having some things baked in and others not, because then there's less incentive to make the extension points really good, since they're not shared by the stuff everybody's using. Okay, but we can decide as we go, like I said, yeah.
D: Rather than listing a whole bunch of types, I think it makes sense to just say: we're going to federate any Kube type trivially, sorry, we're going to propagate it, and then we're going to provide scheduling for workload types like deployments or replica sets. Just because the types, I think, obscure the underlying functionality. If people understand they can federate anything, and then we'll add special behavior for things that require it, to me that makes more sense.
C: Fair enough; maybe I'm misunderstanding some of this stuff. I mean, at the end of the day, we're going to release this thing at some quality boundary, presumably, and it's going to be able to federate some, say all, types in some defined way, you know, with or without scheduling, with whatever features we provide. And the answer is that the next releases can have all of them.
D: I guess you could do that. What I'm thinking about, in terms of what's required: the goal is to be able to define template, placement and override as CRDs and then register them such that they can be propagated. This would take the place of enabling a type, as you would in v1. You would just register it, and if you didn't register the type, it just wouldn't be propagated.
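What registering a type for propagation might look like, in place of v1's compiled-in enable/disable list: the configuration object below is hypothetical, sketched from the convention described above rather than taken from the meeting.

```go
package federation

// FederatedTypeConfig is a hypothetical registration object: creating
// one tells the propagation machinery to start handling a target
// type, and a type that is never registered is simply ignored.
type FederatedTypeConfig struct {
	// The Kubernetes type to propagate into member clusters,
	// e.g. group "apps", version "v1", kind "Deployment".
	TargetGroup   string `json:"targetGroup"`
	TargetVersion string `json:"targetVersion"`
	TargetKind    string `json:"targetKind"`

	// The CRDs carrying the federated template, placement and
	// override for this target type.
	TemplateKind  string `json:"templateKind"`
	PlacementKind string `json:"placementKind"`
	OverrideKind  string `json:"overrideKind"`
}
```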
D: There is a little bit of special behavior around services, and I'm not sure how far that extends to other things; I'm hoping that it's a rarity. But because of the way controllers interact with services, they modify the spec in a way that you can't just overwrite. Shashi's PR kind of made it clear that sometimes you actually have to look at the cluster's resource, look at your desired object, and actually copy things into the desired object that have been set by a controller in that cluster.
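The service special case described here, where controllers inside the member cluster set spec fields that a naive overwrite would clobber, is typically handled by copying those fields from the live object into the desired object before updating. A minimal sketch; the retained fields shown are the classic examples, not an exhaustive list:

```go
package propagate

import corev1 "k8s.io/api/core/v1"

// retainServiceFields copies fields that controllers in the member
// cluster have set on the live Service into the desired Service, so
// that writing the desired spec does not fight those controllers.
func retainServiceFields(desired, live *corev1.Service) {
	// The cluster allocates ClusterIP on creation; it is immutable.
	desired.Spec.ClusterIP = live.Spec.ClusterIP

	// Keep node ports the cluster already allocated for matching
	// ports, unless the desired spec pins one explicitly.
	for i := range desired.Spec.Ports {
		for _, livePort := range live.Spec.Ports {
			if desired.Spec.Ports[i].Name == livePort.Name &&
				desired.Spec.Ports[i].NodePort == 0 {
				desired.Spec.Ports[i].NodePort = livePort.NodePort
			}
		}
	}
}
```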
C: Yeah, that's a fair point. So I think you're thinking primarily about the former case, and I guess I'm thinking more about the latter case. That...
C: The other stuff not to forget, of course, is things like documentation. I mean, there's a huge value actually in these blogs and documentation, and in making sure that someone who's all excited about this stuff can dive in and do it: a quick start, just follow the documentation and actually get something working and kick the tires. And that takes time; I mean, I wrote a lot of that stuff in the past, as many of you did, and it does take time. Yeah.
D: Not a typo; we attempted to move it last week. I know, I think you were out last week. There is some sort of process that we didn't realize: if there is more than one contributor, you can't simply donate a repo, and there's some question; it's basically in the hands of the steering committee, of which I believe you're a member. I am indeed. Maybe...
D: ...you could investigate from that side. But we tried; we actually moved it over to kubernetes-sigs, and then Brian Grant pushed back.
B: And while Quinton was talking, I thought it might make sense, for example, that there is a set of types which are simple types and we sort of classify them as such; and we can also classify whatever our focus areas of work are. We might also propose that we want to create a particular type where actually we just want to replicate the same object as-is into multiple clusters.
D: I would suggest that any given type can be propagated: deployments can just be propagated; services, ingress, you just copy the configuration. It's more like: I want to be able to have this other behavior beyond propagation, and that typically will require a controller. So it's not just a matter of scheduling, as you say; it's more like, if I want this advanced behavior, I probably need a controller that implements it, and we're working on that: a controller that supports scheduling for deployments, or programming of DNS for services, that kind of thing. Good.
B: I'm talking about exactly the same thing. In my list of items, point number one, what I have listed is the first part: we can probably list out a proposed mechanism for simple propagation. So these are the set of steps you have to do if you want to have a type which is not already part of whatever we have written so far: you do this set of steps, and you will probably have achieved the functionality of simple propagation. So, but...
D: I just had a thought; your talking kind of prompted me to think in terms of features rather than types. When Quinton started listing off types in the slide, it was kind of rubbing me the wrong way; it was kind of like, oh, that doesn't seem right to me. And I realized what I think would be easier to communicate is features. So, for Federation, it's: I support propagation, simple propagation, like I'm just copying the configuration of any type; I support that, you know.
D: Yes, just listing those as features rather than as API types would, I think, be in order; and if someone wants a feature, we can document how you would configure it. Versus 'how do I support deployments': well, what do you mean by supporting deployments? Do you mean simple propagation? Do you mean scheduling? Do you mean something more?
B: It does, it does. That's point number two; I listed it exactly like that: something like a feature which is scheduling, that is, implement one flavor of a scheduling implementation for deployments and replica sets. And in the same way we can maybe say a controller, or a programming controller, for, say, jobs: jobs as simple jobs can be propagated now, but maybe there is one flavor of scheduling for jobs which we think is worth implementing.
D: I guess my suggestion would be breaking it out as 'scheduling for type X', 'scheduling for type Y', and I'm not saying you have to do it here, but in terms of laying out all the features that Federation has, rather than saying 'scheduling for arbitrary types', because there are special cases around, like, stateful sets. So it's like scheduling...
E: Yeah, I think when we tried discussing this: status, ideally, will be consumed either by a user or by some of our internal controllers, so I think we can have it both ways. That was the thing that was stirred up by Paul, but we can look back again at that particular point. What matters right now is: if it is for internal controllers, maybe in-memory data would be sufficient.
E: I believe it need not be an API object then; but for a user, I think it makes sense for the user to get that data, so I think we should represent it in an API object. It depends, again, from case to case. For example, the replica count data need not be persisted, but the scheduler can depend on that particular data; maybe it can get it dynamically, by reading from each of the clusters.
C: I guess there's a blurry line. If we're planning to implement these things as separable controllers, which I think is conceptually what has been on the diagram up to now, at that point we do need to communicate between controllers. You know, if the scheduler controller is separate from the propagation controller, for example, then the propagation controller might have to tell the scheduler somehow what the status of the thing in the cluster is, and that seems to argue for putting it in the API.
D: I'm not entirely sure that there needs to be communication between propagation and scheduling. I would expect a scheduling mechanism to run an informer and maintain its own state of the underlying clusters, separate from propagation, something like that. I'm just not sure what propagation can tell the scheduler that would help it schedule.
C: If it is there already and has, say, a different set of replicas, then it actually needs to do an update, because a create will fail. So it needs to read the state and decide whether to do a create or an update; and so, once it has read the state from the cluster, it already knows what the state in the cluster is, which it has to know in order to propagate.
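The read-then-create-or-update flow being described, sketched against the dynamic client from a recent client-go (older releases omit the context argument); error handling is trimmed to the essentials:

```go
package propagate

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// ensureInCluster propagates desired into one member cluster: it
// reads the current state and decides between create and update,
// since a bare create fails if the object already exists.
func ensureInCluster(ctx context.Context, client dynamic.ResourceInterface, desired *unstructured.Unstructured) error {
	current, err := client.Get(ctx, desired.GetName(), metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = client.Create(ctx, desired, metav1.CreateOptions{})
		return err
	}
	if err != nil {
		return err
	}
	// Updates must carry the live object's resourceVersion.
	desired.SetResourceVersion(current.GetResourceVersion())
	_, err = client.Update(ctx, desired, metav1.UpdateOptions{})
	return err
}
```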
D: A shared informer is a Kubernetes mechanism for sharing connections: you basically get a stream of events, and anybody can listen to those events. So it's basically a dispatcher; you could have multiple controllers using a shared informer, they each get the events, and they can manage the state however they want.
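A shared informer in practice: one watch stream per resource type, with any number of handlers attached, each free to maintain its own state. A minimal sketch using client-go against a single cluster; the "propagation" and "scheduler" handlers are illustrative stand-ins:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory, one watch connection per resource type; any number
	// of controllers can register handlers against the same stream.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	// Handler for a hypothetical "propagation" controller.
	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("propagation saw add:", obj.(*corev1.Service).Name)
		},
	})
	// A second, independent handler, e.g. for a "scheduler".
	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("scheduler saw add:", obj.(*corev1.Service).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // run until killed
}
```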
D: I'm not sure that, I would say, implementing a shared federated informer... I'm not entirely sure it would be a huge benefit, because... oh wait, I'm getting things mixed up. A shared federated informer: I think, yeah, you're right, we would actually want that, right? I keep getting confused as to what it's targeting: it's targeting the underlying clusters, not the federated API. Yeah, yeah. So that would be something we would have to do, for sure. Yeah, okay. I mean, in the near term, I...