From YouTube: Kubernetes SIG Multicluster 20180213
A
Yep — about the document that Christian has published; it was expected to complete the documentation for Federation, so I have some updates about that. These are the primary points I put in the agenda today that we might want to cover — that's what I was trying to get from the group, if there's something else we know of. Then we can talk about that.
A
Okay, yeah. So, watch me — we did hit some problems with the CI recently. One update, which was not very obvious unless people talked about it: we have effectively been doing triage. Folks are trying to fix some issues, and whatever queries are coming in here, we are trying to respond to those queries. And one talking point is about the availability of this release.
A
But one thing which is pending is a bit of the release notes, and the release location being part of the repo — updating the documentation to point to the final location. Some portion of that was also delayed because we hit some CI problems, and one of the problems that we did hit came after an update in the k8s tests, one of the latest updates in Kubernetes. So, over the last couple of months, what has happened is that the apps APIs have been moved into the apps group, and they have been gradually migrated over from extensions.
A
We haven't fully migrated over yet, and last week — sometime in the last thirty or so days — they also moved the storage of that API to the apps group. What we have been using here is extensions/v1beta1, and for some reason the defaulting between both these versions differs. Ideally — I mean, I'm not very sure about this particular stuff.
A
So what we have done right now is: we do not have apps/v1 as part of the vendored code that we have, but the same defaulting logic was available as part of apps/v1beta1, so that is what we have used for the storage of that API, which solves the problem for now. The next step, what we are doing, is updating the vendored Kubernetes to 1.9.3; that PR is also sort of complete, so we will be merging it shortly.
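The migration being discussed — workload objects moving out of the extensions group into apps — shows up directly in manifests. A minimal sketch of the after-state (the Deployment name and image are illustrative, not from the meeting):

```yaml
# Before the move, Deployments were commonly addressed as:
#   apiVersion: extensions/v1beta1
# After the move they live in the apps group (apps/v1beta1 at the time
# discussed, later stabilized as apps/v1):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # illustrative name
spec:
  replicas: 2
  selector:                # required in apps/v1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.13  # illustrative image
```

The defaulting differences mentioned above are visible here: in extensions/v1beta1 and apps/v1beta1 the selector was defaulted from the template labels, while apps/v1 requires it to be stated explicitly.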
A
So those are the updates about the ongoing work I discussed last meeting. Apart from that, what we have also done is: we have basically tried to label issues with the correct mechanics. Most of it was done already, thanks to the folks triaging all along, and I always echo that. What we have additionally done is that some issues which are simple to do, but have just been standing there for lack of people with resources to work on them — we have labeled as 'help wanted' and 'good first issue', and a few kinds of things also.
A
So if you go to the issues page and just search for these two labels, you would have at least plenty of issues that can be picked up. Depending on the scope of the post, try to pick up anything extra that you want. There are many other issues, especially relevant to kubefed usability, and there are some big things which needed to be done before we started moving out of the repo. There are almost 50 of them, and because kubefed is a tool, the functionality might be easy.
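Finding those issues is just a label query. A hedged sketch that only constructs the GitHub API URL for such a search — the repository name and label strings are assumptions, so substitute whatever the project actually uses:

```shell
repo="kubernetes/federation"              # assumed repository name
labels="help wanted,good first issue"     # assumed label names
# URL-encode spaces and commas for the GitHub issues API query string.
encoded=$(printf %s "$labels" | sed 's/ /%20/g; s/,/%2C/g')
echo "https://api.github.com/repos/${repo}/issues?labels=${encoded}&state=open"
```

Pasting the printed URL into a browser (or curl) lists the open issues carrying both labels.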
B
So, sorry to interrupt — I'm wondering whether we shouldn't deal with that in the sort of Federation-specific part of the meeting, maybe at the end, because I'm not sure everybody here needs all the detail of Federation-specific stuff. So maybe we should deal with the general multi-cluster updates first, and then, you know, whoever's not interested can leave. Okay.
C
I mean, if you go to the Kubernetes documentation — the high-level, top-level documentation — as a user you'd think that, you know, Federation is alive and well and being worked on, at least the v1, and that's currently not the state of things; many people within the SIG are working on different things. So I don't know — maybe, once we put that document into markdown and it's in the repo, we link to it in the topic of the Slack channel.
C
I'd argue — as a second step, and this doesn't necessarily need to be done by me — we should probably look at what we want to do with that top-level Kubernetes documentation. A lot of the workloads APIs have gone GA; a lot of what I would call single-cluster Kubernetes has gone GA, and that's reflected in the documentation. But it's somewhat unfortunate that, alongside that same documentation, there is 'how to federate your cluster', and right now we're not really staffing it.
E
So I played with Federation a few months ago, and I tried several ways to set up a simple federated cluster. I could set up a cluster, but deploying an app was quite painful and, as a matter of fact, I finally filed almost three or four bugs for that, and I never got it working. My team built the Kubernetes on AWS workshop, which is in the aws-samples repo.
E
We were never able to showcase this to any of our customers. So essentially — and again, I'm joining this group for the very first time; I am a principal technologist at Amazon, focusing on open source and containers — I think any understanding in terms of what is sort of the party line on Federation versus multi-cluster would be very helpful, because our customers are constantly asking us about that guidance, and the documentation seems to be confusing, as we were just talking about.
E
I'll join the Google group as well, so I will definitely go through the document, and I'll also share the Federation workshops, so to say, that we created — but we were never able to deploy it. So I'm willing to give it another shot, because that is a question that our customers often ask, and this could be 'bring your own Kubernetes cluster'. I've been trying this with EKS as well.
E
EKS is a different level altogether, but it's still a restricted cluster at this point in time. As things evolve, though, those are the kinds of things that I would like to try with EKS and see if I can set up a federated cluster running within EKS. As we talk about it, we would like to build our own understanding: what is it that we should recommend to our customers — should it be a multi-cluster or a Federation approach?
E
I just dropped a link to the Federation workshop that we created, and for now it is actually a work-in-progress module section, because we have never been able to get it working reliably — or at least nobody ever actually got it working in terms of Federation, where I could deploy using kubectl to this federated cluster and now my pods are deployed across multiple clusters. So, once again, I'm happy to tag along with anybody, you know, who can help me get this working or figure it out.
E
...the stuff that you are talking about — okay, sounds good, perfect. I think that'll work really well, because all I would need is maybe like an office hour — half an hour to an hour — where we can get this working, because it really helps with the visibility of the effort of this group, and I'm happy to help with that. We gave this workshop three times at re:Invent, we gave this workshop at KubeCon, and again people were asking, 'oh, why is this section not working?' — and now we are giving it again at KubeCon Copenhagen.
E
I mean, our customers are looking to do Federation primarily because they could set up a federated cluster between East and West, and now they can say: okay, go ahead, now you can create stickiness — and then I can say, okay, create my pods and distribute them between these two different federated clusters.
E
So, instead of doing that, what we have been recommending to them is: okay, set up your separate clusters, set up your Jenkins or your CodePipeline in front of it, and then do manual 'federation', so to say — and I'm saying Federation quote-unquote because essentially, as your git commits happen, instead of kubectl doing the federation, your pipeline is now doing the federation and deploying across multiple clusters.
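The manual 'federation' described here is just a CI step fanning the same manifest out to several clusters. A minimal sketch, with assumed kubeconfig context names — the echoed commands stand in for the real kubectl calls a pipeline would run:

```shell
contexts="us-east-1 us-west-2"   # assumed kubeconfig context names
for ctx in $contexts; do
  # A real pipeline step would execute this command against each cluster:
  echo "kubectl --context=${ctx} apply -f app.yaml"
done
```

The pipeline owns the cross-cluster rollout, so no Federation control plane is involved at all.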
E
I think the catch is, you know — because the feature exists, sometimes customers just want to use it. And if a feature exists, I think we should either look at supporting it fully, or we should say that this feature is not fully baked, or this feature is not working. You know, I think that clear guidance, independent of what our customers are doing, would still be very much relevant.
E
I agree, because, you know, not even once have I deployed the federated cluster successfully. I tried to deploy the federated cluster a few times — almost five or six times, actually — and every time, the amount of time for the Federation to come up was quite a while. It could range anywhere from like 10 minutes to almost up to 40 minutes, and at certain times I waited five, six hours and the Federation wouldn't happen, actually. So to me it is unreliable, it does not work, and it's not predictable — so I'm afraid to call it GA.
C
We want to show people how to turn it on, but once we tried to go to production with solutions at Google, and we started working with, you know, users and customers — and even just ourselves — it became really, really important to understand what problem people wanted to solve, because sometimes just having, you know, a bunch of checked-in configs took care of a lot of their problems and was a very good, scalable, well-understood solution.
C
Some of the customers really didn't need to go the full Federation route, because it's a very powerful tool, but it's also one that requires a lot of, you know, ramp-up and understanding. So we keep hitting home: what problem do you want to solve — before, you know, bringing solutions into production, of course.
E
I wasn't trying to go with that golden-hammer syndrome here, no — I was just trying to see whether this works before I talk to my customers, because in the early days, when I was trying it, I was excited about it. But then, when I realized this thing is not even working, I pretty much ended up not talking about it to any of our customers. And then, the more I'm talking to customers and developers, even in SIG AWS...
B
I think on the agenda there is nothing else. So, if anyone is interested in remaining in the room for Federation-specific topics — I think we've got a bunch of outstanding issues that are currently worth working on, if anyone wants to chip in there — then we can talk about those for the next 30 minutes, seeing as we have that much time.
A
Yes, one more update. What happened in the last quarter of the previous year is that we decided to move Federation out of the main repo, and that is where there was some amount of flux: there was a time when there was no fixed location for the releases, for almost three months. And that is also one of the main reasons a lot of confusion is arising for the users.
A
Currently, the status of the documentation which is available at kubernetes.io is that the documentation is a little old, but the mechanism to federate, and how to use it, more or less remains unchanged. The only significant issue is that the documentation points to release locations and binaries that might not be found. As I was updating the releases, that problem has obviously been solved — the binary locations are fixed, and so are the images — so I have to basically update the documentation now. Apart from that, Christian could also — or any of us can probably — update it.
E
Okay, thanks for the update. So I will reach out to you on Slack — unless... I mean, I would really like to understand, personally, the concept of Federation, and as we talk to customers we are trying to ask them: do they really need Federation, or do they need multi-cluster? So that's sort of the input that we are getting from them on a regular basis, but once I understand it, then we can start talking to them: okay, this is actually possible and this is a reality.
B
You know, a lot of cycles were burned up moving things between clusters — at least between source code repositories — and discussing what the various API options were, and whether we needed the simpler, I guess, workflow-based approach that you mentioned earlier. But I think, for the most part, those things have resolved themselves now, and work will continue to build a stable, GA Federation, as well as stable, GA alternatives to Federation, which will allow you to do the kinds of things that you mentioned earlier. — Got it, thank you.
A
Okay, yeah — well, I had one more pointer here. I was talking about one issue that we faced recently because of an update in which version is used to store the apps API resources across Kubernetes releases. So, on this question — I know we have talked about this earlier, and the rough conclusion I remember is that it doesn't matter which particular version we basically use to store the data structure versus the one that we expose — the one that we expose from the Federation control plane.
B
Bear with me — well, so I don't want this kind of discussion to drag on too long, but I guess if we're going to change what we do, we may as well make sure that it makes the most sense we can. So right now we have a bunch of GA things — GA API objects, GA from the point of view of Kubernetes — exposed by Federation: replica sets, secrets, config maps, etc. All GA, I believe, and they're exposed as GA in Federation, but the control plane may not be GA for those API objects.
B
...and not create unnecessary churn by forcing various pieces of the system to be renamed, or manually overridden in terms of the API versions, to convey something which actually doesn't relate to the API version so much as to the implementation of the Federation controller for that API. That was a long story, but does that make sense? — Yeah.
A
So what I understand is that one mechanism to do that is: we can leave it. For example, config maps are exposed through a v1beta1 API — we can use it the same way it is right now, and make it explicit in the documentation that this is not about the API version, but the controller for it; it is deferring to the Kubernetes one at the...
B
I mean, we actually have the same problem with Kubernetes right now. So, I mean, you can have a node API which has been GA for however long, but under the hood there's the scheduler, and you can plug in whatever scheduler you like, and some of them may be GA and others may not. So the API doesn't actually give you any indication of the stability of the underlying implementations.
D
You know, that's a fair point, I guess. What that's reminding me of is that the current mode in the Kubernetes system of defining an API version on your — like, on your REST interface — really captures two things, and maybe both of them not very well. One is, as you say, the API and whether it's subject to future change; but there's also the backing implementation — maybe it only really refers to the default implementation — but something like replica sets is like: oh, that's the one!
D
That's GA, and it's backed by a controller. Well, a controller can change in all kinds of ways; potentially you can swap it out and put something else behind it, and as long as it's nominally, like, compatible for the purposes of a compliance test, that's fine — but you don't get any indication of what's backing the API. — Really? — Exactly, yeah, exactly. So yeah, we're still stuck with that sort of conundrum. I don't really know if there's — I don't have an answer, that's for sure.
B
I don't know what the — I mean, it seems to me we have three options. One is to just leave everything the way it is and not progress anything beyond what it is at the moment in Federation. So if the API supported in Federation says GA, we just don't change it, because it is GA — config maps are GA.
B
Well, I don't know what the right word is — what is the thing that is alpha or beta or GA? Whatever that thing in the API is — is it a version? I'm not sure. But we can just leave everything the way it is, basically not move anything forward; or we can move everything to reflect what it really is supporting. So, for example, if we currently support replica sets, and replica sets have just gone to GA, and we are consistent with that API, which was finalized at GA, we could reflect that in the API.
B
I think that's technically more consistent and kind of makes sense, logically, but it will definitely create some confusion — perhaps more so than we really have — so I would not recommend that route. I'm also hesitant to recommend the route of, like, rolling everything back to say it's all alpha: even though previously it was called GA, and the same thing in Kubernetes is called GA, and we're supporting precisely the same API, we're not calling it GA — that seems also confusing. So, in summary...
A
So the solution that I have is: currently we are going to put out some releases, right, and we are going to label them. So we label that release as beta, whatever versions we use — like, currently what we are using is a version matching the k8s one — and what I am proposing is that this release basically maps to, or supports, a particular data structure. For example, this release I have labeled as 1.9.0-beta-something, and it maps to 1.9...
B
I forget the precise definitions that we have for alpha, beta and GA, but I'm pretty sure alpha is 'may change and may never be supported into the future', beta is 'not quite finished yet, but will progress to GA', etc. — and so that was always the plan. But then, of course, we have changed the plan, so I would be inclined to call all the releases from this point forward alpha, and...
A
The reason I was hesitant starting out is that this would set me off track. One thing I have not listed: there has been one category of issues, relevant to the control plane, which was RBAC-related. I have not basically included those in this listing, because they might need more than the resources we have — one of them is already more complicated than the rest, and I'm not really sure how exactly we might want to solve that particular problem, whether it follows from the control plane or involves a different control-plane change.
A
Apart from that, there were many documentation issues — well, not many; at least five that I know of — so people could look at those, and they are labeled as docs. They're also a very good starting point to fix documentation. If there are some more things we can add in documentation, I would probably go ahead and create them.
A
This could be a good pointer for people who want to start. Apart from that, if we remember, there was this backlog that we used to maintain. There are more complicated items over there, which some of us intend to pick up in due course of time, but this is one location that is listed as part of the meeting notes, which is at the top of the page, at the beginning of the notes.
B
Perfect, cool. Yeah — if it's in front of you, can you flip back to that list of issues quickly? There, adding a flag for a timeout seemed both somewhat useful and would also give the person who does it an opportunity to test kubefed: actually deploy a federation and verify that it works, add the timeout flag and verify that that works. And then that'll set them up for doing something with the documentation stuff after that, once they've actually deployed something. — Makes sense, yeah.