From YouTube: Kubernetes SIG Federation 20170522
A
A
B
There are, right now, still early enough phases and discussion that I think it will potentially change in some respects, but it works such that that conversion work has been done. I expect that once that is done and submitted, doing the same thing for the deployment controller should be relatively straightforward, I believe. The secret, configmap, and daemon set controllers have been converted to use the sync controller. I don't think there's any expectation that all the controllers will be converted over this cycle, especially the ones that are more complicated and have different logic.
B
So I expect the ingress and service controllers will continue to remain as they are through 1.7. We have to look in more detail at what work is necessary to convert them, but I personally have not looked at them enough to know how feasible that is.
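For readers following along, here is a rough sketch of the kind of per-type adapter a generic federation sync controller could rely on, so that "simple" objects like secrets and configmaps share one propagation loop. The interface and method names below are illustrative assumptions, not the actual federation/pkg/federatedtypes API.

```go
// Hypothetical sketch of a per-type adapter for a generic federation sync
// controller; names are invented for illustration.
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// TypeAdapter abstracts the per-resource operations so one sync controller
// can propagate any simple object (secret, configmap, daemon set, ...) to
// member clusters without type-specific loops.
type TypeAdapter interface {
	// Kind returns the resource handled by this adapter, e.g. "secrets".
	Kind() string
	// Equivalent reports whether the member-cluster copy already matches
	// the federated object, so the controller knows whether to update.
	Equivalent(federated, clustered runtime.Object) bool
	// ObjectMeta exposes metadata used for labels, annotations, overrides.
	ObjectMeta(obj runtime.Object) *metav1.ObjectMeta
	// ClusterCreate, ClusterUpdate, and ClusterDelete perform the write
	// against a single member cluster's API server.
	ClusterCreate(clusterClient interface{}, obj runtime.Object) error
	ClusterUpdate(clusterClient interface{}, obj runtime.Object) error
	ClusterDelete(clusterClient interface{}, obj runtime.Object) error
}
```

Converting a controller then amounts to writing one such adapter rather than a full reconciliation loop, which is why the simpler controllers were converted first.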
C
Yeah, I could help with that. This is fantastic news; thank you very much for the hard work that you and others have put into making this more sane. How are... instead of, I guess, whatever you call them, syncers? Off the top of my head, maybe we can sit down and talk about it. I mean, I think the service controller stuff is probably the most complex, and it's not that complex. Basically, what happens is, when you create and delete services in underlying clusters...
C
We also need to trigger updates to DNS, and then in the other direction, when the status of services in underlying clusters changes, you need to update the aggregated status of the federated service. But neither of those is super complicated, I guess. The question is where to hook them into the current sync logic, and yeah, it would be fantastic to get to a point where everything is using the same simple object.
D
D
E
That's true, still. I think it should be relatively easy. Now we just need to update the federated service, which is not there currently in the sync controller: get the aggregated status and update the federated status. That's one thing, and another one is that we should be watching on two objects, service and endpoints. So that's also probably not in the sync controller yet. Well...
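As a hedged illustration of the aggregation step being discussed, and not the actual federation service controller code, the sketch below collects per-cluster service statuses into one federated status by merging the load balancer ingress entries; the function and map names are made up for the example.

```go
// Illustrative sketch of aggregating member-cluster service statuses into a
// single federated service status; not the real controller code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// aggregateServiceStatus merges the LoadBalancer ingress entries reported by
// each member cluster into one status for the federated service.
func aggregateServiceStatus(perCluster map[string]corev1.ServiceStatus) corev1.ServiceStatus {
	var agg corev1.ServiceStatus
	for _, status := range perCluster {
		agg.LoadBalancer.Ingress = append(agg.LoadBalancer.Ingress, status.LoadBalancer.Ingress...)
	}
	return agg
}

func main() {
	statuses := map[string]corev1.ServiceStatus{
		"cluster-a": {LoadBalancer: corev1.LoadBalancerStatus{Ingress: []corev1.LoadBalancerIngress{{IP: "10.0.0.1"}}}},
		"cluster-b": {LoadBalancer: corev1.LoadBalancerStatus{Ingress: []corev1.LoadBalancerIngress{{Hostname: "lb.example.com"}}}},
	}
	fmt.Printf("aggregated: %+v\n", aggregateServiceStatus(statuses))
}
```

The same aggregation output is what a DNS updater would consume when services are created or deleted in member clusters.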
B
You might want to hold off until some of the work for replica sets, for instance, or for HPA, has gone in, because they will be adding some more logic to the sync controller. That should, if not be exactly what you need, at least be more like what you need and give you a pattern forward, because right now the sync controller, as it exists, will only really work well for basic objects that don't have any complicated scheduling logic or status recording. But once replica sets are being synced...
B
C
A
H
H
I
I
A
How about that design, that's the one thing I wanted to ask about. Was it last week? We had discussions and suggestions, and I know there's an implementation in parallel right now, but I'm not sure the design document, what we have discussed, and what's in the code entirely reflect each other, so...
H
A
H
H
I
Yep, about that, I would want the HPA controller also to be added to this list, at least a first-phase implementation. I know there were some comments on the design and we probably would need some updates, but I am sort of ready with the first-cut implementation of this thing. So anyways, there are a couple of PRs out today, so hopefully it can go through a couple of rounds of reviews in the next eight days, and we should be able to merge that.
I
Last week we had, yeah, the status of the implementation is in sync with the design that was reviewed last week. Okay, there were additional things which probably were not clear in that design document and which I had to update. I haven't updated that yet, but I will surely update it within two days, so that is the status of the design.
I
The status of the implementation is quite in sync with what we discussed last time, except for one point, which is that the other controllers should also have some option of switching off rebalancing or reconciling when HPA is on. On that I am not sure what I should do in the PR, because the replica set controller is, I mean, not completely moved to the sync controller, but it's not all in the old one either.
I
H
I
My intention is to implement one more flag, an optional one in the annotations, updated not by the user but by the HPA controller, something like "reconcile all" or that kind of thing. So there has to be some communication between the two controllers, for example the HPA and the replica set controller. So if there is an HPA which exists in a particular cluster, for that cluster... sorry, not for that cluster, for that particular target object.
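A minimal sketch of that annotation-based signal, assuming a hypothetical annotation key and helper; this is one way the two controllers could communicate and is not the agreed federation API.

```go
// Hedged sketch of an annotation set by the federated HPA controller to tell
// the replica set controller to stop rebalancing; the key is hypothetical.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical annotation the HPA controller would set on the target object
// (e.g. a federated ReplicaSet) when it takes over per-cluster replica counts.
const skipRebalanceAnnotation = "federation.alpha.kubernetes.io/skip-rebalance"

// hpaControlsTarget reports whether the HPA controller has claimed the object,
// in which case the replica set controller should leave replica distribution alone.
func hpaControlsTarget(meta metav1.ObjectMeta) bool {
	return meta.Annotations[skipRebalanceAnnotation] == "true"
}

func main() {
	rsMeta := metav1.ObjectMeta{
		Name:        "frontend",
		Annotations: map[string]string{skipRebalanceAnnotation: "true"},
	}
	if hpaControlsTarget(rsMeta) {
		fmt.Println("HPA owns replica distribution; skipping rebalance")
	}
}
```

The annotation is written by a controller rather than the user, which matches the intent described above of keeping it out of the user-facing spec.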
I
D
H
D
K
E
D
D
You know, his initial PR went in and he was asking about it: so he added his code to the sync controller, and he had a question regarding what we should do for the other controllers, especially the older ones, the service controller in particular. So it seems fine to me to add this logic to the service controller as well, because we know we won't move that controller to the sync controller in 1.7.
D
C
K
D
E
B
E
L
Yeah, so the initial PR is up, Mikkel. Whenever you have a chance to look at it, yeah, feedback is wanted. There are two things that are not included in there right now, I guess. One is integration testing; I don't know if we need to do that for this, it's just an admission controller. If I could get some guidance on that, that would be great. And the other thing, I guess, is documentation.
L
L
Yeah, I'm not sure what your definition of integration testing is, but yeah. Maybe after you look at the tests in the PR, you can give me a pointer if I need to add more. I think it's fairly well covered right now: I mock up all of the engine in the admission control tests, so all the failure cases are handled right now. Yeah, let me know.
D
L
I
D
Yeah, so we had a discussion; I did a session with Marui last time as well. What we are trying to address right now is that the credentials Federation uses have access to all namespaces, and there is no way to restrict them to some set of namespaces. I'm trying to work on a PR to fix that, so that admins can restrict Federation to just a given set of namespaces.
C
K
C
K
Not precluding that, but I mean, if you're talking about needing, you know, per-user update tracking, or per-user, per-action update tracking, I haven't talked to anybody on the auth side who knows more about this than I do who could say that's a good idea and we should do it, or quite the opposite. And I'm curious to hear, you know, if you've talked to other people who said that it was feasible or reasonable for Federation to tackle this on its own.
C
C
The bigger concern is, if you have credentials, sorry, RBAC on underlying clusters that grants or revokes permissions to do various things to objects, and my understanding of RBAC is that you do have control finer-grained than the namespace, you have per-object permissions, and if that is potentially not being enforced because the actions are being performed as a different user, this gives a given user the ability to do things they shouldn't.
C
K
K
C
C
I mean, I am comfortable with the idea of a feature we could provide to push RBAC rules from Federation down into the clusters, for those users who want consistent RBAC rules in all of their clusters or in subsets of their clusters. But that's not the only use case. There's another use case where each cluster is administered independently by different people and has inconsistent RBAC permissions. So we're...
K
D
I'd like to just jump in and say the original idea was that Federation uses the user's credentials to send those requests. That, I agree, has a lot of caveats and such, but there was another idea which was discussed, where maybe we can have an admission controller which does the authorization check in the underlying cluster. So when Federation gets a request, it does the authorization check in the Federation control plane, but then it also does the same authorization check in the underlying clusters as well.
D
If you want to create a deployment in all clusters, it verifies whether this user has authorization to create the deployment in all the underlying clusters. It finally sends the request as the Federation service account, but it does the authorization check earlier. This could be an optional admission controller that admins can enable or disable.
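A rough sketch of that admission-time check, assuming the admission plugin already has a client for each member cluster; the wiring and function names are hypothetical, not the actual federation admission controller, but the SubjectAccessReview call is the standard way to ask a cluster whether a given user could perform an action.

```go
// Hedged sketch: deny a federated create unless every member cluster would
// allow the same create for the requesting user.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// userMayCreateDeployment asks one member cluster, via a SubjectAccessReview,
// whether the user could create the deployment there directly.
func userMayCreateDeployment(ctx context.Context, cs kubernetes.Interface, user, namespace, name string) (bool, error) {
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: user,
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "create",
				Group:     "apps",
				Resource:  "deployments",
				Name:      name,
			},
		},
	}
	result, err := cs.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return result.Status.Allowed, nil
}

// admitFederatedCreate rejects the federated request if any member cluster
// would refuse the equivalent create for this user.
func admitFederatedCreate(ctx context.Context, clusters map[string]kubernetes.Interface, user, namespace, name string) error {
	for clusterName, cs := range clusters {
		allowed, err := userMayCreateDeployment(ctx, cs, user, namespace, name)
		if err != nil {
			return fmt.Errorf("checking cluster %s: %v", clusterName, err)
		}
		if !allowed {
			return fmt.Errorf("user %s may not create deployments/%s in cluster %s", user, name, clusterName)
		}
	}
	return nil
}

func main() {}
```

As discussed below, this only covers admission time; once the request is accepted, propagation happens as the Federation service account.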
I
K
Seems like a hole that security people are not going to like. Or is it just that you're going to do an initial check, you're going to accept it, and then you're going to fail? Are you going to perform the check at propagation time too, or is it only at admission time?
K
D
It's similar to the various other authorization checks: we do the authorization check at admission, then we accept the request, and once it's accepted, we ensure that it will happen. It's like when you create a deployment: we check that you have authorization when we accept the deployment, and then the deployment controller will create replica sets or pods on your behalf, maybe even after your access has been revoked since your request was accepted. At the time, you had the access, so the request will proceed, right?
K
I don't know, that seems a little bit different to me, because the initiation of, like, creating the deployment is one thing, and as you say, once it's handed off to the controller we'll treat it as if you're authorized. But I mean, in this case you have the underlying cluster with authorization capabilities, and you're just ignoring them or bypassing them. So...
C
It sounds like the other implementation that they have is just broken as well, and the question is: do we just build another broken one? I mean, clearly, if I don't have permission to create pods in a cluster, and that permission gets revoked and the system is still creating pods for me, that seems just broken. And yes, we could build an equivalently broken one; I agree, Michaela, it's probably consistent with what we have, I guess. The question is what we had, what we want, or what the permission people want.
D
C
Yeah, I don't know the answer off the top of my head; that question seems like a difficult one to answer well. I can certainly see use cases where an employee leaves and they get all their permissions revoked, but you don't necessarily want the whole system to fall down on its face. But if it's optional...
A
A
So one of the other discussions we had was, Nikhil, correct me if I'm mistaken, and this was attractive to the auth folks, to try to contain, or at least associate, users to particular namespaces, and associate them with whatever RBAC roles across the federated clusters. That way it's easier to revoke workloads, because they're tied to particular namespaces.
D
Yes, it is similar to the first point, that we'll restrict the credentials Federation gets to some subset of namespaces that you want to federate. Let's say two namespaces across all your clusters: you give Federation access to only those two namespaces, and then you can revoke the namespaces Federation has access to, so you limit Federation's access to only some subset of namespaces.
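For illustration only, and not a design the group has agreed on: a namespace-scoped Role plus RoleBinding of roughly this shape in each member cluster is one way an admin could limit what the Federation control plane's credentials can touch; the object and account names below are invented.

```go
// Hypothetical illustration of restricting a federation service account to a
// single namespace in a member cluster; all names are made up for the example.
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Role granting access to common workload resources, but only inside
	// the "team-a" namespace.
	role := rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "federation-writer", Namespace: "team-a"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"", "apps", "extensions"},
			Resources: []string{"secrets", "configmaps", "services", "deployments", "replicasets", "daemonsets"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
		}},
	}

	// RoleBinding attaching that Role to the service account the federation
	// control plane uses when talking to this cluster.
	binding := rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "federation-writer", Namespace: "team-a"},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      "federation-controller-manager",
			Namespace: "federation-system",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "Role",
			Name:     "federation-writer",
		},
	}
	fmt.Printf("%+v\n%+v\n", role, binding)
}
```

Revoking the RoleBinding in a namespace then cuts off Federation's access to that namespace without touching anything else in the cluster.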
C
C
B
D
C
M
N
Hello, can you hear me? Okay, okay, there was some problem with my mic, I guess. So, the Federation presubmit was made required on Wednesday or Tuesday, I think, but it somehow started breaking on Friday. We made a mistake there: we moved a job which was running on Jenkins to prow, and then, when it broke, we figured out that we did not know how to use prow to manually kick-start runs.
N
So, instead of leaving it blocking, we finally cleared it out on Friday late evening. Jonathan and I were debugging until like 6:30 on Friday, but we figured that instead of blocking the whole PR queue and submit queue on this presubmit, we would rather make the presubmit not required for the weekend and then work out what was going on on Monday. So that's the current status. I know we started the build cop rotation this week, so in the future this is going to be a responsibility of the build cop.
N
N
We still need to debug this: we don't know why we started seeing the failures on Friday, and the second problem was that we did not know how to kick-start the runs. So we now know how to do the second thing, but for the first, I don't know what caused the failures. It did seem like an infrastructure failure, but I don't know exactly what; we did not understand what caused those failures.
H
N
N
C
Just one suggestion: before we hand the responsibility for these blocking tests over to the build cop, I would suggest that we at least, you know, verify that the infrastructure part of it works correctly, that the people who implemented it do that and then hand it over, let's say after two weeks or something, to the build cop. And in the interim, you know, as soon as you diagnose whether there's a PR that's buggy or whatever is causing things to fail...
N
There are two things here. Basically, the build cop we are talking about is the Federation build cop rotation, so the people who work on Federation are the people who are on this build cop rotation; technically it's the same people, and at least I'm on the rotation.
N
The other thing is, I agree with you that we need to verify this, but there is no way we are going to verify all the scenarios, and unexpected things are going to happen, for example the way it happened this week. So I think the right thing to do here is to document what to do when things break, where to go look, and what to debug, which is what I intend to do.
N
C
Again, I mean, I understand what you're saying, and I repeat what I said: I think you personally, or whoever the people are that have put this in place, should be responsible for getting it stable before it gets handed over to the build cop on call, which this week is Nikhil. I mean, in theory the whole group is the same group of people, but in practice they are different people. So the distinction I'm making here is Nikhil having to fix it as opposed to you having to fix it, I mean.
C
N
C
N
C
B
E
N
C
F
Hi everybody, it's Colin. So the AWS one is up in a PR right now; we've just been working through review for the last couple of weeks, but I do hope we get that merged before 1.7. The results, last time I checked, looked okay: we had probably 40 passes and eight failures for the Federation focus, so there's a little bit of work to do to get that entire grid green as well.
N
C
We should definitely put that near the top of our list for 1.8 if it's not going to get into 1.7, because lots and lots of users are using Federation on GKE and we don't even know if it works there. I mean, we do know from manual testing, but as soon as it breaks, we don't know. I would say the majority use case for Federation is GKE.
J
So this is my first time at the SIG. I'm sort of acting as the liaison right now between Azure Container Service and the SIGs. So if there's something I can do to help push that forward, anybody, let me know what that might be; that would be super helpful. And I'm on the community Slack as JD Mars.
C
F
I will also say that in the PR for the AWS Federation e2e test, I did a little bit of work hooking kubefed directly up with kubetest, which makes parallelizing bring-up a lot easier, among other things. You may just want to take a look at that before you get started on Azure.
J
C
J
C
J
That's very wise; I've had that question myself. I'm new to the org, because I was with a company that got acquired by Microsoft, so we're still finding our way through the organization, which so far is amazing. Microsoft is really sincerely committed to open source, which is kind of blowing my mind in a really good way. So that's really good news for the Kubernetes community as well. Mm-hm.
H
I
Yeah, hi, yeah, it seems difficult, I mean, because all these points are pending and there are another eight to ten days before the code freeze, so I think we can do the review now. In fact, I was also thinking that we would do that, but because other points were being discussed, I let it go. We can do it next week if there is no other more important point for 1.7 now.
C
I
C
Yeah, I mean, that was a pretty headline item for 1.7. It would really be sad if we missed it because of our review process rather than out of necessity. Is there any way we can rescue it by reprioritizing? For example, we could potentially turn it around today or tomorrow. I reviewed it quite a while back and I thought we resolved all the hard problems, and, assuming you get the go-ahead today, you could go ahead with the design as it stands today.
I
C
A
Sure. A quick note: over the last two meetings we covered four design reviews, and that's worked reasonably well, where, you know, by the Friday, which would be two or three days before the meeting and before presenting the review, people can read up and be ready for the ensuing discussion. Jia Hui and Nikhil presented, and Eric was also here from the testing group. I think that model reflects where we want to be in terms of having everybody on the same page.
C
B
If it were approved right now, the PR would be out for review in a week, which is the 29th of May. Which leaves, no, the 30th of May, because the 29th is Memorial Day; next Monday is a US holiday. So that leaves maybe two or three days before the code freeze to actually review and merge the PR, because the code freeze is June 1st, right? So that seems like a very short timeframe. If the PR were out today, it would seem possible.
C
K
C
K
K
C
I
Yep
I
have
done
some
basic
implementation
of
the
same
where
the
stateful
sets
could
be
created
in
underlying
clusters
that
kind
of
stuff,
but
there
are
additional
things
like
we
have
to
additionally
create
services
as
per
the
current
design.
Yes,
we
have
traditionally
create
services,
nothing.
Those
stateful
sets
and
the
more
important
and
complex
portion
is
the
DNS
record
with
the.
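As a hypothetical illustration of that "additionally create services" step, and not the agreed design: a per-cluster headless Service selecting the StatefulSet's pods is the usual way to give each pod a stable DNS name, which is what the DNS records would then build on. The names, labels, and port below are invented.

```go
// Hypothetical sketch of the governing headless Service a federated
// StatefulSet controller would create in each member cluster; names and
// labels are invented for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessServiceFor builds the Service that matches the StatefulSet's pods.
func headlessServiceFor(statefulSetName, namespace string) corev1.Service {
	return corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: statefulSetName, Namespace: namespace},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: DNS resolves to pod IPs
			Selector:  map[string]string{"app": statefulSetName},
			Ports:     []corev1.ServicePort{{Name: "peer", Port: 7000}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", headlessServiceFor("cassandra", "default"))
}
```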
C
I
Well, into the current sync controller, yes. So, actually, for the last two weeks I have been trying to find common ground on rebasing the complex controllers on the sync controller, and because that was not getting anywhere, I did not proceed with the other work. Yeah, I have been trying to proceed with the HPA controller somehow on top of the sync controller, yeah.
C
D
A lot of customers have been asking for it, but at least the way I see it, the need in my mind is like this: there's a lot of work to do for stateless workloads as well, just to ensure they work. And yes, we do want to support stateful as well, but yeah, first make sure at least stateless works, and in the meanwhile we review the design and implement the stateful solution. Okay.