From YouTube: Kubernetes SIG Multicluster 20180313
A
Okay, so in the last meeting a question was raised about the duration of the SIG Multicluster meeting, which is fortnightly: should we reduce it? The recommendation was to maintain the status quo, but anything specific that we need to discuss can be put on the agenda beforehand.
A
My viewpoint on this was that the cluster registry has been discussed from a wider perspective, outside SIG Multicluster; there might be users of the same registry beyond the clusters we're installing it with. I'm just thinking out loud; I'm not sure whether there might be any repercussions. I don't have any objections to what you're suggesting, but does it make sense to put this out to a wider audience and take a viewpoint on it?
B
There's been a proposal to make it more general purpose, registering any API rather than just clusters, and to me that simplifies it: you're cutting away the things that are specific to a cluster and just having listed endpoints, which maybe lends itself better. To my mind, the constraints that led to a more complex solution, and which argued against using CRDs, have been relaxed, which suggests revisiting that decision.
E
Well, I'd just add that it is something we're looking into. Jonathan and I have talked about prototyping a solution that uses CRDs instead, and it would help simplify things, to all of Marie's points. So it is something we're looking at, and I think it can be done and should be done. It also looks like Gregg is commenting that Madhu is in talks with Jonathan as well, and that Jonathan is working on a proposal about using CRDs too.
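As a rough illustration of the CRD-based direction being discussed, registering a cluster could reduce to a custom resource that simply lists its API endpoints. This is a sketch only; the group, resource names, and fields below are illustrative stand-ins, not the actual cluster-registry API:

```yaml
# Sketch: a CRD standing in for the cluster registry's aggregated API
# server, using the apiextensions v1beta1 schema current at the time.
# All names here are illustrative.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusters.clusterregistry.k8s.io
spec:
  group: clusterregistry.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: clusters
    singular: cluster
    kind: Cluster
---
# A registered cluster is then just a named set of endpoints.
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: us-east-prod
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - serverAddress: "https://203.0.113.10:6443"
```

The trade-off raised later in the discussion applies directly here: a definition like this relies on CRD versioning and validation, features that did not exist when the registry was first written.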
G
I think, Quinton, the reason we didn't do it initially was that, when he started working on the cluster registry, some of the features that have since been introduced into CRDs, or are now being proposed as features headed to CRDs in, say, the next Kubernetes release, didn't exist yet; for example, versioning and validation.
F
Cool, yeah. One of the reasons I ask is that we're strongly considering using CRDs for Federation as well, so if there were good reasons why you didn't use them for the cluster registry, those might apply to Federation too. But it sounds like whatever those reasons were no longer exist.
B
That is partially correct. Versioning, that is backwards compatibility, is still a work in progress, and from talking to Phil, the answer has been that we don't really have a timeline for when that will be done; it could be soon, or it could be a long period of time. So there's a certain amount of risk in Federation assuming it can rely on that. I'm not saying it's not a risk worth taking, but it would have to be carefully considered.
C
Okay, I have one point here, in the context of Federation v1 and with respect to federated ingress. Given that there is kubemci now, which works for Google Cloud (GCE and GKE), and that the federated ingress controller is currently specific to this Google environment, I am seeking opinions: shall we make it generic enough to work on anything, just federating the Ingress to the member clusters and then collecting the load balancer IPs?
F
The question is whether it's materially better than federated services, for example; otherwise people might just use those. The second observation, and I haven't been following the conversations very closely, is that I understand the whole concept of Ingress is very much under discussion; there's a reconsideration of whether we did it the right way the first time around, and there might be substantial changes. I don't know what the status of those discussions is, but you might want to look in on that and find out.
C
The main thing is that it is super useful for federated Ingress objects to be behind a single global IP. Without that, each cluster will have its own load balancer IP, and then clients need to target the correct load balancer IP to talk to an instance. I think that's a useful thing, but it is also limiting in terms of what clusters can be federated.
C
Okay, I think that's a fair comment, but the current implementation is kind of messy: you first find out the ingress UID, then propagate the same UID to the other clusters, and then federate the Ingress objects. That looks like a complex thing.
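For context on the UID step described above: the GCE ingress controller derives its load balancer resource names from a UID kept in a ConfigMap, so federated ingress has to make every member cluster agree on a single UID. A hedged sketch of that shared object follows; the exact ConfigMap name and key are recalled from the implementation of the time and may differ:

```yaml
# Sketch: the UID that federated ingress reads from the first member
# cluster and copies into each of the others, so that all clusters'
# ingress controllers program the same global load balancer.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-uid        # ConfigMap consulted by the GCE ingress controller
  namespace: kube-system
data:
  uid: "0123456789abcdef"  # must match across all federated clusters
```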
F
You know, I implemented that, actually, way back when, and yes, it is complex. It was basically the only reasonable way to get Google Cloud ingress to work in that sort of environment without making significant changes to the actual ingress controllers running in the clusters. I don't know if Nikhil's on the call, but Nikhil took a different approach and basically programmed the GCP load balancers directly from his tool, instead of leaving it to the ingress controllers to do, and then I think he had to make some changes to the ingress controllers in the clusters to effectively disable what they were doing. So that would be an alternative approach if you wanted it, but it sort of moves the complexity from one place to another. Still, it is a possible alternative.
H
Yes, so, sorry: I'm from Cisco, and I work on a team currently in the area of edge computing. One thing that we're thinking a lot about, related to your question, is how, outside Google Cloud and outside AWS, we can configure multi-cluster load balancing. We're right at the beginning of discussions on how we could address that, and on the different options there, but that's kind of my personal investment in this.
F
I wonder if it makes sense for you, me, Shashi, and whoever else is interested to have a little breakout meeting about this particular topic, federated ingress in particular, because it's kind of been on the cards for a long time, and it might be good to answer whatever questions might be hanging around. So we can do something there, yeah.
F
Okay, so we will just send an invite. I think we have a mailing list for the Federation working group; if not, we can just use the SIG Multicluster list and invite everyone, with a little caveat that it's only for those interested in this specific topic, so at least everyone knows the meeting is happening.
G
I believe you talked about CRDs for the cluster registry, which would probably have been the largest update on the cluster registry side. The update that I had was that I've been in the process of writing up the pros and cons of that as I saw them, and then hopefully sending that out to the SIG soon. I've been a bit under the weather, so it's not exactly been going quickly, but hopefully this week.
A
Okay, I can also give a very quick update about what we did last week. We saw a lot of issues and requests around RBAC, specifically how kubefed handles RBAC access to the clusters, so I was able to provide a workaround, an alternate path where users can use a different mechanism: rather than RBAC, normal auth could be used in join or unjoin. So that's done.
A
We
were
able
to
finish
the
rendering
mechanism
update
that
is
done,
and
we
also
have
been
able
to
update
and
complete
the
testing
around
the
azure
DNS
provider,
which
the
testing
is
completed
and
the
peer
should
be
merged
in
and
they
okay.
So
apart
from
this
yeah
I
mean
this
is
what
we
have
completed
in
master.