From YouTube: Kubernetes SIG Multicluster 20180605
A: I hope this at least triggers some questions on some of the details of how we are doing things, so I'll give the basic motivation for why we are looking at this.
A: So CERN has a large amount of resources for processing the data from our detectors, and the main motivations cover things like the periodic load spikes that we have once or a couple of times a year. There are international conferences where people need more resources, and sometimes they look at old data and try to do some reconstruction, so there are periodic spikes, and the demand for these is also significant.
A: So we have a lot of custom systems that we've built in-house over the last 20 or 30 years. We do custom monitoring for all of these; we have to manage the lifecycle, alarming and deployments, and we have to do this every time we use external resources. A common complaint is that the APIs are different.
A: The notion of the resources underneath is also different: a network is not the same network everywhere. So the fact that Kubernetes offers all these common abstractions is why we are trying to look at it, to get a uniform API.
A: The two main use cases we've looked at are the CERN batch system, which has something like 350,000 cores (these days I think a bit more), and then RECAST analysis, which is a more forward-looking, cloud-native way of doing things.
A: Right, so actually internally we've been using Kubernetes for a lot of things. We have something like two hundred clusters, but they are not all dedicated to batch. What we do is partition the resources: basically every time we buy new resources we set up a new cell, and we have something like seventy clusters dedicated to batch right now. So that would be roughly the number of clusters we would be talking about in-house.
E: Cool. My questions can come later, yeah.

A: And there's very good coverage now. Actually, it's a good follow-up: CERN is what we call the Tier 0.
So when the data comes from the detectors, we store it all here first, and then we make copies to what we call Tier 1s, which are pretty big, large-sized centers; those are meant to be our main centers.
A: But then we have a couple hundred smaller centers, universities or laboratories around the world that want to participate in this program, and over the last 10 or 15 years we've connected them all together. In total we have something like 200 sites, and we run jobs over all these sites: something like half a million jobs running at any moment. In total we have 700,000 cores.
A: The main thing here is that this software was developed over the years, but the management and the maintenance of these sites depend very much on who's behind them. So it's kind of problematic, and there's a trend to move to a smaller number of sites, but larger and better managed ones. If we were designing it today, we would definitely expose a common API, something like what Kubernetes is doing; it just didn't exist at the time. So Kubernetes is kind of a natural fit for this.
A: So I'll jump to the HTCondor part. HTCondor is the main batch system we have to run all the jobs that process the data from the experiments. It has four main components. One is the schedd, the scheduler which takes jobs from users. These jobs can have priorities and the resources they need; they are expressed using ClassAds, which are pretty much key-value pairs, and they allow things like fair share and preemption.
A: On the other side, the startd advertises what a machine offers, like the number of CPUs it has and its memory. Then there are centralized components: the collector, which collects all these ClassAds from both sides, and the negotiator, which does the matchmaking. That's pretty much it. This is handling the compute part; access to storage and the automatic management of networking are not done by Condor, so it's expected that the infrastructure somehow advertises these appropriately, yeah.
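To make the ClassAd idea concrete for Kubernetes readers: a ClassAd is a set of key-value pairs such as RequestCpus and RequestMemory that the negotiator matches against machine ads. The mapping below onto a Kubernetes Job's resource requests is our own illustration, not something shown in the meeting, and all names in it are hypothetical.

```yaml
# Illustration: a job asking for 4 CPUs and 8 GiB, the kind of request an
# HTCondor ClassAd would express as RequestCpus = 4, RequestMemory = 8192,
# written as a Kubernetes Job. Image name is a placeholder.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-batch-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: payload
        image: example.cern.ch/experiment-payload:latest  # hypothetical
        resources:
          requests:
            cpu: "4"       # ClassAd: RequestCpus = 4
            memory: 8Gi    # ClassAd: RequestMemory = 8192 (MB)
```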
A: So when we started looking at federating this, because the schedd, the collector and the negotiator are kind of centralized, we started by federating the startd, and this is what we added to the Kubernetes definition. We basically declared it as a DaemonSet, so on whatever resources appear in the federation it just automatically launches, and that was using Federation v1, if you want. This is the first use case we looked at: it's kind of taking an existing system and trying to make it cloud native using Kubernetes.
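A minimal sketch of what declaring the startd as a DaemonSet might look like, assuming a containerized startd image and a collector address; none of these names come from the talk.

```yaml
# Sketch: one condor startd per node; each pod advertises its node's
# resources to the central collector at CERN. All names are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: condor-startd
spec:
  selector:
    matchLabels:
      app: condor-startd
  template:
    metadata:
      labels:
        app: condor-startd
    spec:
      containers:
      - name: startd
        image: example.cern.ch/htcondor-startd:latest  # hypothetical image
        env:
        - name: CONDOR_HOST                  # where the startd reports to:
          value: collector.example.cern.ch   # the centralized collector
```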
A: So in the current deployment this is the case: they will all call back to a centralized collector running here at CERN. It might be that in the future we would go for something a bit more complex, but for now this is what we have. Now, if we look forward, we could think of running the whole system using native Kubernetes, also for the job management, and this is what we've been doing with the second use case.
C: I'm actually a physicist, so this is kind of a first physics application of Kubernetes and container technology. The idea here is: typically, what a graduate student creates during their thesis is a data analysis pipeline, and we're building a service called RECAST that uses these existing, archived data analysis pipelines in order to test new theories of fundamental physics.
C: We use Kubernetes to do that. We have a workflow engine called yadage, which has a built-in control loop that keeps track of what jobs need to be run in what order, and then, if we have jobs that need to run, we submit them to a Kubernetes cluster, or in this case to a federation of Kubernetes clusters. On the right-hand side you can see a visualization of a workflow graph.
C: We also showed that at KubeCon. Each node in this directed acyclic graph is a Kubernetes Job that runs somewhere in the federation, and each job does several things: it stages in the data, then runs the main payload, and then stages out the results, and we have jobs like that running over the datasets. Since this is fundamentally dependent on containers, we kind of skipped the entire Condor part and used Kubernetes Jobs natively, submitting to the federation.
A: So we already federated the internal clusters into one single Kubernetes federation, and then we started looking at adding external sites, which pose additional challenges. We tried using Amazon, GKE and Azure and a couple of other public clouds, plus some European ones, and this kind of validated the idea that, if we use Kubernetes as the common API, it's actually very easy to integrate external resources. And this is true regarding compute; it's not necessarily true regarding storage, because there's a big separation between the compute resources and the storage resources in the way we do it. For the experiment software, this is kind of solved by containerizing, like Lukas was saying, containerizing all the experiment software, but also, in the case where we need to propagate it to the local sites, we have an internal system.
A: We have a system that uses hierarchical squid caches, and we just deploy an additional squid cache using Kubernetes at the public cloud, which will just fetch data from the top-level one here at CERN.
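A minimal sketch of such a squid layer in the external cloud; the image, port and upstream hostname are assumptions, not details given in the talk.

```yaml
# Sketch: a squid cache in the public cloud whose cache misses are fetched
# from the top-level cache at CERN. Image and hostnames are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: squid-cache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: squid-cache
  template:
    metadata:
      labels:
        app: squid-cache
    spec:
      containers:
      - name: squid
        image: example.cern.ch/frontier-squid:latest  # hypothetical image
        ports:
        - containerPort: 3128   # conventional squid proxy port
        env:
        - name: SQUID_PARENT                  # upstream cache to forward
          value: squid-top.example.cern.ch    # misses to (hypothetical)
```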
For the physics data we don't have a solution right now. This is a much larger set of data, a few hundred petabytes, and we don't really know how to easily propagate it to the public cloud. I mention it here because it's kind of important for us to solve this issue.
A: Finally, I have a couple of backup slides regarding how we use the federation today, so I'll go through them quickly. Basically, this would be the host cluster for the federation, and all we do is run kubefed init on it, using the existing tools for Federation v1, and this runs the schedd. So this is the example for Condor, not for RECAST; for Condor it runs the schedd, the collector and the negotiator. We start the federation and we launch these using Helm charts.
A: We launch this here, and then for each site, a local or external cluster where we have a Kubernetes cluster, we just join it. In this case I gave an example with two at CERN and one in an external cloud. As I mentioned, the startd is simply a DaemonSet, and, as was mentioned, the clusters are all pretty much the same, so it's pretty easy to do this. This is just described, also using Helm charts, and the last bit is the storage.
A: It pulls the software as it's required by the jobs, and there's a warm-up phase to get the current software being used, the popular software versions, but then it's actually pretty fast once we get it in. Now, all this infrastructure, for RECAST we don't need it: as Lukas was explaining, once we have everything defined as Kubernetes Jobs, we don't need these infrastructure deployments, you can use Jobs straight away. But for many of our existing systems we will have to do more detailed deployments like this.
A: So, for the Condor part, the job allocation on the Condor side is not really managed by Kubernetes. As long as the startd has the proper configuration to talk to the collector, that's fine. For the actual jobs, maybe Lukas can give more information, but I think we are using one single account that is taking everything, yeah.
A: All right, thanks. One of the things we are looking at right now is integrating the auth part with our own CERN identity, or some kind of federated identity that we might have around here as well. But initially it would be CERN identity, because it's kind of expected that everyone running jobs here will have a CERN account.
E: I had another question about your use of federated jobs, which seems like a very interesting and pretty big, large-scale use of Federation. For federated jobs in v1 we had some fairly fancy features for dynamically scheduling jobs between clusters, you know, pushing more workload into the faster clusters, etcetera, to complete the job quicker. Did you use any of that? Or could you perhaps just go into a little more detail about how the jobs get federated across multiple clusters?
A: For now we didn't do any kind of prioritization of the jobs; I wasn't even aware that was possible. We didn't try it, so we kind of just kept it simple. Something we will have to do is to have some fair share between the different experiments, but it's not something we've tried until now, yeah.
G: The expectation is that there will be something provided by Federation, but if you need special capabilities for your environment or use cases, you'd be able to implement your own scheduling mechanism and basically write placement or override resources, and the propagation would still work; you don't have to write the propagation layer yourself.
E: The way we implemented it in v1, and we can certainly talk to you guys, we based it on first principles rather than any specific use case, but the basic principle is that it's assumed that there is some common work queue that all the clusters can pull work items off, and then we divvy it up. You can actually specify a relative weighting of clusters, so it'll put, you know, more or fewer items to be completed in each cluster, but then it will automatically reallocate between clusters.
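For concreteness: Federation v1 expressed this weighting through a preferences annotation. The sketch below uses the documented ReplicaSet form (the federated job controller used an analogous preferences scheme); the cluster names are invented for illustration.

```yaml
# Federation v1 weighting: ask the control plane to spread replicas 2:1
# across two member clusters and rebalance if one falls behind.
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: worker-pool
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
      {
        "rebalance": true,
        "clusters": {
          "cern-batch-01":    {"weight": 2},
          "external-cloud-1": {"weight": 1}
        }
      }
spec:
  replicas: 9
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: example.cern.ch/worker:latest  # hypothetical image
```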
E: If, for example, work is not successfully being done in a cluster for whatever reason, maybe it's misconfigured or maybe it doesn't have capacity or whatever, then the work items will be transferred to another cluster which is making faster progress, the theory being that the ultimate goal should be to get the work done as fast as possible. But it doesn't currently have enough intelligence to do things like get the work done as cheaply as possible, if that was a constraint. Okay.
E: Yes, I think so. I mean, when I say easily, it's not like a configuration parameter, but I think it would be a fairly simple add-on, or in the worst case you could build a custom scheduler to do that. If you knew what the cost of processing a work item in a given cluster was, you could get it to, you know... and I guess it would be a trade-off between cost and time.
E: The other thing that strikes me as potentially useful to you, you already mentioned it, is that you've got a lot of data flowing between these clusters, and figuring out how to do that best. I mean, a very simple approach of having squid proxies or something in each cluster and just pulling it all out of some central data store is one option, and it might actually work fine, but you could also do things like, you know, restore snapshots into particular clusters, etc. Right?
A: So for the software we know this works, because we're not talking about multiple petabytes, but for the physics data it's actually an unsolved issue for the external clouds. The way we do it right now is using a distributed storage system that we've built over time, but it only works because we know the participating sites very well. I don't know how easy it would be to extend this to public clouds, so it's something we are looking at.
A: So that's a good point. One thing that we didn't mention also is that the jobs are independent from each other: in the batch case, each job will process a large chunk of data, but it won't need to communicate with the other jobs, so that's an important point. The other one is that the data is produced here at CERN, pushed into our system, and then it kind of propagates to the different sites. So this is true for the way we do batch.
A: It's not necessarily true for what Lukas was explaining, where you have a workflow in which, even if the jobs are not talking to each other, they actually have to share the results with the next step in the workflow. In that case, maybe Lukas can give more details, but we are not talking about large quantities of data, I think, and we are looking at using S3 as a common API for that. Maybe he can say more, yeah.
C: So you need some way of sharing the data, and in existing systems we just have a big replication service. Somewhere the jobs, you know, stage in data: they fetch the piece of the dataset that they want to work on, do the work, and then push out the data. It's conceptually similar to what we would do, and it works okay.
C: There's this open issue on Kubernetes to maybe have deferred containers, because this is a very common pattern, at least in scientific workloads, to have these three stages, and that would simplify a lot. Basically, the logic that does the stage-in and the stage-out would then be arbitrary: it could talk to S3, it could talk to some other replication service; that is then an implementation detail, you know.
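Until something like that exists, the three-stage pattern can be approximated with primitives Kubernetes already has: an init container for the stage-in, and the stage-out chained after the payload, since there is no exit-container counterpart to init containers. A sketch, with every image and script hypothetical:

```yaml
# Approximating stage-in / payload / stage-out today: stage-in as an init
# container, stage-out chained after the payload in the same container
# because Kubernetes has no 'deferred' (exit) container type.
apiVersion: batch/v1
kind: Job
metadata:
  name: three-stage-job
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
      - name: workdir
        emptyDir: {}          # scratch space shared between the stages
      initContainers:
      - name: stage-in
        image: example.cern.ch/data-tools:latest   # hypothetical
        command: ["fetch-dataset.sh"]              # e.g. pull input from S3
        volumeMounts:
        - name: workdir
          mountPath: /data
      containers:
      - name: payload
        image: example.cern.ch/analysis:latest     # hypothetical
        command: ["/bin/sh", "-c", "run-payload.sh && push-results.sh"]
        volumeMounts:
        - name: workdir
          mountPath: /data
```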
F: You can do only some limited parallelism, move the jobs across clusters if they are failing in a cluster, and that kind of stuff, but it seems like you might have some more extended use cases, especially based on cost or time, some constraints like that. So that input might be useful to define the job scheduling API that we are actually going to tackle next. So we can get in sync, or you can provide that input: what exactly is it that might be beneficial to you?
E: That's great, and I think we might have chatted about that. So, a quick question for the CERN guys, and you don't have to answer it now. One is: presumably you did some work internally to make all of this work, and some of it might have involved changes to either Kubernetes or Cluster Federation. Is there any of that that you would be interested in open sourcing or pushing upstream, that you think might be useful? And secondly, going into the future, is there any area... it sounds like these...
A: So yeah, definitely. I think many of you might know that our infrastructure is deployed using OpenStack, so the private cloud we run is using OpenStack, and we kind of run Kubernetes on top right now, and we are very used to contributing upstream for OpenStack.
A: So we are kind of reorganizing ourselves to dedicate more time to Kubernetes, and we just started a project where we want to engage more with the community, and this will definitely include providing code for the local changes we had to do to Kubernetes.
As you mentioned, storage is very important for us, so we actually wrote a CSI driver for the system that we use for the hierarchical squid caches, which is CVMFS, CernVM-FS; we wrote the CSI driver for that.
A: We also wrote a CSI driver for CephFS, which is another system that we use to share data among clusters when the clusters are running only at CERN, because outside we can't, so there we use S3; but internally we actually wrote our own CSI driver. So we've been writing all these pieces that integrate with Kubernetes, and eventually, yeah, I guess, as you mentioned, we'll start contributing more to the core.
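As an illustration of how such a driver surfaces to users, here is roughly what a StorageClass for a CVMFS CSI driver could look like; the provisioner name and parameter are assumptions based on CERN's later open-source cvmfs-csi work, not details quoted from the meeting.

```yaml
# Sketch: exposing a CVMFS repository to pods through a CSI driver.
# Provisioner name and parameters are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cvmfs
provisioner: cvmfs.csi.cern.ch   # assumed CSI driver name
parameters:
  repository: atlas.cern.ch      # hypothetical CVMFS repository
```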
E: Excellent, yeah. Federating storage is an area we've given relatively less attention to than the other areas up to now, and it sounds like you guys have some very hard-won experience with some of the biggest data sets in the world, so it would be great to put our heads together and see what we can come up with there.
C: So typically there are two separate issues with the data staging. There's the out-of-scope issue of what data the center has and, you know, what machines have what data locally, but then there's also the in-scope issue of staging the data into the container, and that should happen relatively quickly if the data is already on the physical machines, and then the main bulk of the job would be the payload itself.
E: All right, should we call it a wrap on that? Thanks, that was a very educational presentation. We really look forward to working with you guys, so feel free to... For those who may not be familiar, we kind of have two main sync-ups with regards to multi-cluster in general. We have this one, which happens every two weeks and covers all things related to multi-cluster. So right now we have the cluster registry...
F: So, as Quinton already mentioned, there are three sub-projects as part of SIG Multicluster. There's kube-mci, that's the multi-cluster ingress; it's mainly offered by Google, and as of now the tool that is implemented works for Google Cloud only. The cluster registry is a sort of stand-alone registry of clusters.
F: It's only a registry; it does not hold anything beyond endpoint information about the clusters. There have been use cases where such a registry would be useful, not just for multi-cluster use cases, but for any other use case which might be relevant to users.
F: This SIG was earlier SIG Federation, and Federation was the only project that the SIG basically looked at. Federation v1, as we call it right now, is the implementation of the work that happened until last year, after which the scope of the SIG was widened and it was renamed to Multicluster. We have right now also evolved that Federation API into something more decomposable, which can be implemented separately, the controllers of which can be implemented separately, in a sort of layered approach.
F: The idea is that there is a reference implementation of a base layer, which does only the sync and propagation of any given Kubernetes type's state, and a higher layer which can implement higher-level controllers for specific scheduling types or for any other type. For example, for jobs there would be a higher-level controller which can utilize the lower-level controllers and implement the necessary use cases. So that's just about the three projects; I guess it'll be beneficial.
I: This is Jonathan; I can talk about the cluster registry. I unfortunately don't have that much context on the work that's being done for kube-mci, and I don't believe that anybody who is more familiar with that effort was on the call, but I will at least talk about the cluster registry.
I: On the cluster registry, there's been a small flurry of work in the past few weeks to get the cluster registry API more ready for a beta release. A few weeks ago the API was updated from using an extension API server to being a CRD-based API, which was a significant simplification to the amount of code in the repository and the amount of machinery necessary to set up a cluster registry, both in the bootstrapping use case and in the more general case of trying to run it in production.
I: Okay, it may mean something specific in two different environments, but it should generally mean that, for the given environment, you can get to the cluster and you can interact with that cluster. I think, with these changes in, I am prepared to move the API to beta; I wanted to let the changes sit for a little bit of time in order to make sure that nobody came along with some more suggestions or arguments. The other change that I made was I modified the way that the cluster object references authentication information, from being a map to being an object reference.
So now it should be possible, instead of just having to put the information directly on the object, to create an object reference to, say, a Secret or some other object definition elsewhere in your cluster, which could be in a separate namespace, that defines the authentication information that you want to use. And there are also now two separate object references.
I: One is meant to be used by controllers or robots that want to interact with that cluster, and one is meant to be used by users, which should hopefully make that delineation clearer and make it easier for people who are attempting to build automated tools not to stamp on something that a user wants to do with their cluster in the cluster registry, such as storing credentials for themselves. With these changes in place, I think the object is in a good state to go to beta; if anybody has any concerns or questions, please go take a look.
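Putting those changes together, a registry entry would look roughly like this; the shape follows the v1alpha1 API as described above, and every value is a placeholder.

```yaml
# A cluster-registry Cluster: endpoint information plus the two separate
# auth object references (controller vs. user). All values are placeholders.
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - clientCIDR: "0.0.0.0/0"
      serverAddress: "https://my-cluster.example.com"
  authInfo:
    controller:                          # for automation and robots
      kind: Secret
      name: my-cluster-controller-creds
      namespace: cluster-registry       # hypothetical namespace
    user:                                # for humans
      kind: Secret
      name: my-cluster-user-creds
      namespace: cluster-registry
```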
I: Everything is there in the repo now; I just cut a v0.0.6 release which has all of these changes. Other than that, I don't think there is much of a status update. The one sort of related point, which I sent a note about to the SIG list a few weeks ago, I would like to circle back on.
I: One special thing to note about these namespaces is that they're particularly not ever meant to be replicated between clusters. So if you were running, say, a federation, or some sort of analog to federation where you were mirroring things between namespaces, that tool should, by convention, avoid mirroring the contents of the kube-multicluster-system and kube-multicluster-public namespaces.
E: A question, Jonathan: it seems like a reasonably common use case would be to want to propagate a common list of clusters across more than one cluster, so that all jobs had local access to that cluster list. So, given what you just said about not federating these things, how do you envisage that being done?
I: That's a good point, Quinton; we hadn't particularly considered that use case. We had been considering much more the use case of a central cluster list, rather than one that is propagated across all the clusters. But that is something we could consider. I don't know how we would want to define that, but it is something we should probably consider more, perhaps later or offline.
I: Yeah, that makes sense. I think, in an ideal world, if this project were more centralized, under the auspices of say one organization, it might be easier to propose a model for having a federated system namespace. But given the potential for disparate developments, and for people to develop their own solutions even outside of the SIG, having this basic understanding I think will help people, or at least give people some assurance about where things should go.
G: Yeah, that makes sense. I guess I was just extrapolating to, like, the kube-system namespaces: Federation, like the push reconciler, is special-casing those and just not touching them. What I could see in the future is wanting to, say, federate RBAC rules or other administrative configuration to member clusters, but you're probably right that that's probably not the case for federation system namespaces, since it's unlikely you're going to want multi-tiered federation, I mean.
F: So the main decision taken up front for the Federation v2 effort was to decompose the Federation API. The earlier API was just the same as the Kubernetes API, which was given special meaning in the Federation control plane using annotations and that kind of stuff. In Federation v2 we all thought that it would be better to have a first-class API which defines the fields, the information, which is needed by the Federation control plane.
F: Further to that, the model that we have, or v2's view of the model, is that there is some necessary information per object of a given Kubernetes type, say a Secret. We define these as template, placements and overrides, and this, we say, is enough to federate a given Kubernetes type. Recently Maru and I have extended the same implementation so that it can be utilized for CRD types also, the user-defined types.
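The general shape of that decomposition, sketched for a federated Secret; the field names follow the early v2 template/placement/overrides design, but the exact group, version and layout varied as the API evolved, so treat this as illustrative only.

```yaml
# Early Federation v2 shape: template is the plain Kubernetes object,
# placement selects member clusters, overrides patch per-cluster fields.
apiVersion: types.federation.k8s.io/v1alpha1   # assumed group/version
kind: FederatedSecret
metadata:
  name: app-credentials
spec:
  template:                   # the Secret to propagate
    data:
      token: c2VjcmV0         # placeholder base64 value
  placement:
    clusterNames:             # which member clusters receive it
    - cern-batch-01
    - external-cloud-1
  overrides:
  - clusterName: external-cloud-1
    clusterOverrides:
    - path: /data/token       # per-cluster replacement value
      value: b3RoZXI=
```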
F: This is what we sort of call the lower layer, and on top of that, specific or proprietary use cases can be built, which we call higher-level controllers or a higher-level API. An example would be a scheduling type, or the job use case, which is a very specific use case for that kind of an API and an associated controller. So currently we are here, and as for the future path that we are thinking of going ahead with, we can't talk about a very long-term kind of future.
F: We can talk about the short term, maybe a couple of months ahead. The implementation we have as of now is aggregated; one of our goals is that we ideally should be able to move to a CRD-based implementation, so that we do away with the requirement of a separate API server for these APIs. And in terms of who is contributing, there are two major contributors right now.
F: There is Red Hat, which is contributing, and there is Huawei, which is contributing. For Huawei it is also important that we have higher-level controllers and some amount of feature parity with what we had in Federation v1; for Red Hat I can't really say what might necessarily be important, so yeah.
G: Part of the reason for decomposing and breaking things down into smaller pieces is to allow solutions to be composable rather than monolithic, and also to encourage, or at least enable, third-party contribution. It kind of lowers the barrier to entry if you don't have to work with a monolith and can instead work with a fairly simple set of well-defined and well-documented APIs. So our hope is to basically form a community by making it easier to contribute.