From YouTube: Kubernetes SIG Federation 20170801
A: Okay, so there's been this planning around multi-cluster (what has become known as multi-cluster), and an in-person meetup happening next week, I believe. Red Hat put together what I thought was a very, very good set of slides on the use cases that they would like to support, with a relatively high-level set of requirements which don't necessarily mandate the use of Federation, but they do.
A: Please read that if you haven't yet, and add your comments. I think that's a very useful platform on which to have the discussion that we've planned for next week. In particular, I think it'd be really useful if Google, who has some thoughts around alternative approaches and use cases that they would like supported, could either put together an equivalent or similar document, or annotate the existing document with whatever it might be missing that Google would like to support. Any more comments or questions around the multi-cluster discussion?
C: Also, I had some questions: if anyone had concerns or wanted to add something, or wants to remove something from the agenda for next week. The draft agenda is in the invite, so if you want something that's not in that draft, please note it now, so that we have a chance to actually make time and coordinate with most everybody. Yeah.
A: You could just add a link to the document in the notes there. Sure, yeah, awesome. All right, I don't want to spend too much time on the design review stuff; I just want to make sure that everyone is roughly aware of what the status of everything is, and give anyone the opportunity to call out blockages or anything that's being held up. So there's the federated read access design, which has been through quite a bit of review by a variety of people. Jim, who has actually implemented...
A: Most of that, I believe. I'm not sure if he's here today, but last I checked in with him, a couple of weeks ago, he had most of it done. You can still make some input there if you want to; I think most of the interested people have at least made some comments on the documentation, but if you have any more feedback, you know, the code isn't in yet, so relatively minor changes, etc., can still be accommodated. Then there's per-user-class authorization.
A: I won't go into the details, but suffice it to say: someone without the authority to do something can send a request which should not succeed in a given cluster, and would not if it was sent through as them, but it can end up being piggybacked on the back of someone else's request, someone who is authorized to make updates. That's a kind of untenable situation, so we have to figure out a way around that, and we don't have a good answer to it yet. There are a few other, you know, somewhat less major problems, like...
A: I think there was general consensus that, rather than try to come up with one true way of doing it and force that, we work towards giving people the option: it seems like there are two reasonable options, both of which have pros and cons, and we give administrators the ultimate choice between the two, with the appropriate guidance and explanation as to the pros and cons of the two different approaches. For the rest of you who were there: is that a reasonable summary of where we stand?
A: That may be my fault; it wasn't a general call for... I guess, in general, it should perhaps have been more widely communicated, perhaps by the SIG, that it was occurring. It wasn't necessarily a call for, like, everyone to descend upon Google's campus and make input, so much as to get the specific people who had done the design and were actively involved in implementing it to, you know, work through the final issues.
D: You know, I would say this is an example of people not being able to participate because they don't actually know about it, and I would hope that we could avoid that situation in the future. I think the whole point of being an open-source development effort is that we encourage and facilitate participation, and in this case it doesn't really sound like that happened. So, just for the future, we can maybe try and avoid that.
A: I think that's fair criticism, and I can take responsibility for it. I guess Nikhil asked for help, I offered to help Nikhil, and that's sort of how it happened. But yes, I think you're absolutely right: it could have been more widely publicized, and other people could have participated. Point taken, thanks.
E: To give a very brief update on the status of that design: whatever feedback I did get from the community on the PR, I have tried to take and update the design as much as I could. Implementation-wise, I could raise the API PR probably this week, and the controllers and all can follow. Okay, so the implementation...
E: One point which will need to precede this: that is, a decision on whether we do this on top of the sync controller, or as a stand-alone controller itself. If we do it on top of the sync controller, I would need to do a couple of PRs which modify that in some fashion, and I'll need some review, probably from Maru, on that. And given...
A: Not only that, but, you know, if the framework for this stuff is insufficient, if the sync controller cannot accommodate general controllers, then it seems like we should fix it sooner rather than later. We have quite a lot of stuff in the pipeline that is of that nature, you know: resource quota, also getting things scheduled into two different clusters, etc.
E: If you remember, there had been some discussions on this, where we try to make whatever adaptations, or whatever additional interfaces are there, generic for any other controller which can be ported onto sync, so it will proceed from the same discussion. Then, yes.
D: I am available to do reviews. I'm a little bit cagey about what's going on; I'm hoping we can have more clarity at the face-to-face. It kind of seems like a lot of the stuff that we're discussing today is kind of full speed ahead, as if there wasn't any question as to what the future of Federation is gonna be, and I'm a little bit less sure about what's going on. So, okay.
A: Yeah, I understand. So, I mean, I've gone through the Red Hat document in quite a bit of detail, and, you know, it's my opinion that there is not sufficient uncertainty that we should just put the brakes on and not do anything until that discussion is complete. Certainly from the Huawei side...
A: We're very keen to continue to invest resources in the current approach until a better approach emerges, but I can certainly understand that that may not be an opinion shared by all companies. And, yeah, I just wanted to get a sense, so that after next week's discussion, hopefully, you will be in a position to either say "I'm not investing any more in this" or "I'm investing, you know, quite a bit more in this." Is that a good...
D: I don't necessarily think a lot of our customers are at the level where they can even consider, you know, a global compute pool as an immediate near-term goal, because they just aren't at that level. And I kind of get that you and Google and other organizations that have global applications are just a step beyond. I don't necessarily know that a lot of other customers are asking for that capability; they're more primitive in terms of their requirements. So, yeah.
A: And I saw your responses to my comments, and rather than carry on backwards and forwards with comments, I thought it'd be better to discuss it in next week's meeting. I wasn't going into it with an assumption of what people want; I was just taking the actual requirements expressed by Red Hat and trying to understand them in more detail.
A: There are a couple of alternatives, and one has to be explicit about what the acceptable behavior is. That was all I was really trying to tease out: not mandate that the behavior has to be X or Y, or that it has to be a global compute cluster. If the requirement was that, you know, this thing should be able to split a job between two clusters, then the question becomes, you know, what does that really mean? Yeah.
C: When you make a comment, just to add some additional information: some of the use cases we're running across (and we'll update the docs and throw some stuff out there) are things like jurisdiction and policy, which are also complex and also, I think, don't fit in a global compute pool context. Mm-hmm, right. And just how to manage that in an effective way for customers.

Sorry, I didn't quite follow who was speaking there. Was that Jesse?
A: Right, yeah. I think, you know, I'd press strongly upon everyone: if you have things that you think we need to solve, and you have concerns about the current general approach to solving them, the more you can write these things down, the better. The Red Hat doc was great, I thought: a very brief summary explaining what the actual use case is and what the fundamental requirements of the use case are.
D: I would second that request, just because we're gonna have a limited amount of face time available. I think it would be preferable if we had the opportunity to educate ourselves as to what the interests of the participants are before we get to the meeting, so we have time to try to find common ground, as opposed to just learning about each other's respective goals.
A: Cool. We're starting to run short of time, so I'm gonna suggest we speed up a little. Resource quota is another one that I'm aware of: there's a design that's being reviewed, I think fairly extensively. Wagin mentioned to me (I don't know if he's here today) that he's been kind of swallowed up by some other project work, so he's not able to make a contribution to that immediately.
A: 1.8 tasks: I just kind of wanted to remind everyone that we did go through a prioritization exercise, and there was a bunch of stuff identified by various companies as things they wanted to work on. I don't think we need to go through that in great detail; perhaps after next week's face-to-face meeting would be a more appropriate time to do that. I think there has been a general...
A: You know, modulo the discussion that will be happening next week, there has been a general consensus that moving the existing stuff from alpha to beta to GA is one of the sort of headline items, and I think a bunch of people signed up to do that. I was just wondering if there is any progress being made on that, or should we assume all of that stuff is stalled until the end of next week?

I don't even see that on the roadmap now; it's been on there for some number of releases now. Well...
D: I kind of consider that to be the whole... the fallacy that somehow moving an individual resource moves the product closer to that status; I've never really thought that made sense. We can have the discussion separately and figure out what the best high-level approach is, anyway. I guess my point is that I don't really think there's an action item that's been fleshed out that would provide a path, and I think that would be the next step.
A: Agreed, and my point was: has anyone made any material progress towards that goal? I know that some people nominally had their names against taking things (the product as a whole, and individual pieces) to later stages and beyond, and my question was: is there anything happening there, or is it essentially not started yet?
A: From my recollection (and correct me if anything else has happened), we decided that there were some issues with the current CI testing, in particular the per-PR tests, in that they weren't actually testing the code in the PR in the clusters, and that had led to at least a few failures. I think you, Maru, actually brought up the original issue, which then kind of evolved into a different discussion.
G: On the federated cluster bring-up, moving to an alternative method: so we discussed, actually I discussed with Madhu, and we are of the general opinion that we should move to kubernetes-anywhere. Currently kubeadm uses that in their testing; they are using kubernetes-anywhere to bring up the cluster. So we should extend that to maybe bring up multiple clusters, and then the federation on top of that.
G: So probably, if this is all migrated to the newer method, then mostly we could also solve bringing up the clusters in parallel, which could bring down the time for bringing up the clusters. So probably we could move back to the older method, where in every PR we could bring up the cluster, test it, and tear it down. So I think that can be an alternative approach, instead of fixing it the way it is right now.
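To illustrate the parallel bring-up idea just described, here is a minimal sketch. The cluster names are made up, and `bring_up_cluster` is a stub standing in for the real kubernetes-anywhere (or kubeadm) invocation; the point is only the background-and-wait pattern that makes total bring-up time the max, not the sum, of the per-cluster times.

```shell
#!/usr/bin/env bash
# Sketch: bring up member clusters in parallel, then layer federation on top.
set -euo pipefail

bring_up_cluster() {
  local name=$1
  # Placeholder: a real CI job would invoke kubernetes-anywhere (or kubeadm)
  # here to provision the cluster named "$name".
  echo "bringing up ${name}"
}

pids=()
for cluster in cluster-a cluster-b; do
  bring_up_cluster "$cluster" &   # start each bring-up in the background
  pids+=($!)
done

# Wait on each PID individually so a failed bring-up fails the whole job.
for pid in "${pids[@]}"; do
  wait "$pid"
done

echo "all clusters up; deploying federation control plane"
```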
D: My alternate proposal is that we partition the e2es into things that actually require nodes and things that just require a representative configuration of the API, so that we can test, you know, with the docker-in-docker setup, most of the stuff. I would advocate for doing all presubmit testing with docker-in-docker, because the setup time is minimal and the room for error is minimal, and then we would do postsubmit, where we'd find out...
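The partition D proposes could be sketched as follows. The test names and the `[NeedsNodes]` tag are hypothetical, standing in for however the suites would actually be labeled: tagged tests run postsubmit on real clusters, everything else runs presubmit against a cheap docker-in-docker (API-only) setup.

```shell
#!/usr/bin/env bash
# Sketch: split a flat list of e2e test names into presubmit (API-only,
# docker-in-docker) and postsubmit (needs real nodes) sets by tag.
set -euo pipefail

tests='Federated Ingress spans clusters [NeedsNodes]
Federated ReplicaSet spreads replicas [NeedsNodes]
Federation API objects CRUD
FederatedService DNS records'

# Presubmit set: everything that does NOT need nodes.
presubmit=$(printf '%s\n' "$tests" | grep -v '\[NeedsNodes\]')
# Postsubmit set: only the tests that need a real node pool.
postsubmit=$(printf '%s\n' "$tests" | grep '\[NeedsNodes\]')

echo "presubmit:";  printf '%s\n' "$presubmit"
echo "postsubmit:"; printf '%s\n' "$postsubmit"
```

In a real job the two sets would feed different ginkgo focus/skip expressions, but the selection logic is the same.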
D: If there was some other integration (with, you know, GCP and ingress) and that was broken, that's not really the fault of anybody; it's only our responsibility to fix it. We don't need to block anybody on that. We just need to make sure that any change to Kubernetes that breaks Federation is blocked, and I...
D: You can have really fast setup, repeatable, with minimal problems. Anytime you're deploying a Kubernetes cluster, there's a chance of a transient test-infra failure. If you minimize that by avoiding having to actually deploy a bunch of nodes and wire them together, your reliability has to go up, on just pure moving pieces. It's...
H: Fine, but it hasn't been a problem so far, that much. If you're saying we are just being lucky, we could also make the argument that, when things have gone bad, we were simply lucky. Testing in the real environment at least once before we submit, I mean presubmit, is important, because catching our problems early on is much better than fighting with them afterwards; it's hard on people, right?
D: Nobody's arguing that testing is a bad thing, but it's always cost-benefit, right? We could run a million tests, but it would take a million years. The issue is, we want to minimize the impacts: both, you know, the resources to fix broken tests on our side and, in my mind, the impact on people who are stalled because we're having to fix those tests.
A: I'm gonna interrupt here and suggest that the item on the agenda was: we need some improvements, we need some changes, and we actually need to have what I think is going to take more than a ten-minute discussion to come to some agreement as to what those things are. It seems like we have at least some disagreement between the two people speaking at the moment, and I have some opinions also. Can I suggest that we schedule a one-hour meeting? And I guess it might even need more than that.
A: Yeah, I guess there are two things worth thinking about. One is when we have leaks; then the thing just keeps failing. In other cases there's normal organic growth, in the sense that the number of tests increases, the cluster size may increase, etc., and sometimes through natural growth we just need to increase the quota, which is a different and much less severe problem. I guess figuring out which of those two is happening would be useful.
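A toy illustration of telling those two cases apart. The usage numbers are invented; in practice the signal would come from CI quota metrics sampled after each run: a leak climbs monotonically run after run, while organic growth rises and then plateaus.

```shell
#!/usr/bin/env bash
# Sketch: classify a series of per-run resource-usage samples as a leak
# (strictly increasing, never reclaimed) or organic growth (stabilizes).
set -euo pipefail

classify() {
  # $@: resource usage sampled after each CI run
  local prev=-1 rising=1
  for v in "$@"; do
    if [ "$v" -le "$prev" ]; then rising=0; fi
    prev=$v
  done
  if [ "$rising" -eq 1 ]; then
    echo "leak: usage is strictly increasing; find and fix the cleanup bug"
  else
    echo "organic growth: usage plateaus; raising quota may be enough"
  fi
}

classify 10 14 19 25 32   # every run leaves resources behind
classify 10 18 24 24 24   # suite grew, then stabilized
```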