From YouTube: Kubernetes SIG Multicluster 20181204
Speaker B: Then, in terms of where to start, maybe I'll start with the kubemci CLI that we had. We put it out for customers to try out, and there are people trying it out. There's also a link to an issue in the meeting notes that has more feedback from customers; it shows the status of where they are. There are a few customers who are trying it out, and a few issues.
Speaker B: So at a high level, I can explain the API we've come up with. We came up with two API resources: one we call MultiClusterIngress, and one we call MultiClusterService. This is where the model is a bit different from the CLI. To compare the two: as people know, in the CLI model you give an ingress spec and the CLI syncs that ingress to all the clusters, and that's how you get multi-cluster ingress.
Speaker B: With this API there is a MultiClusterIngress, but it is not synced to the target clusters. There is a MultiClusterService that is synced to all the clusters, and that's the advantage here: we don't have to force users to create a service in all the clusters; that is done automatically using the MultiClusterService. With the CLI it was on the users to create a service in all the clusters and ensure it had the same NodePort, and then ingress would be taken care of by the CLI.
Speaker B: So the advantage here is you don't have to do anything in the clusters. You have deployments, and then you create a MultiClusterService, which is synced to all the clusters, and you create a MultiClusterIngress, which then configures the Google Cloud load balancer to give you this multi-cluster load balancing. We have cluster selector support as well. With the CLI it was all clusters: you would give a list of kubeconfigs and it would use all of them. Here you can have cluster selection, on the MultiClusterService.
Speaker B: If you look at the API, it has a concept of destination clusters, which is based on label selection. So it's flexible: you can select a list of clusters based on that label selector, and your service will be synced to those target clusters. By keeping the cluster selection on the service, the advantage is that you can have different paths in the same MultiClusterIngress pointing to different clusters.
Speaker B: For example, you can have /foo in clusters A and B, and /bar in clusters B and C. Those cluster selectors are independent and can select different clusters for different paths. So that's another advantage you get. Yeah, so that is most of it. The API, if you look at it, the MultiClusterIngress looks very similar to a single-cluster Ingress. It has very similar config, just that each path points to a MultiClusterService.
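The two resources described above might look roughly like this, written as Python dicts in place of YAML manifests. Every field name below is an assumption for illustration; the talk does not spell out the actual schema.

```python
# A MultiClusterService: like a Service template plus a label-based cluster
# selector that picks the target clusters the controller syncs it into.
multi_cluster_service = {
    "kind": "MultiClusterService",
    "metadata": {"name": "foo-svc"},
    "spec": {
        "template": {  # the Service to stamp out in each selected cluster
            "spec": {"selector": {"app": "foo"}, "ports": [{"port": 80}]},
        },
        # label-selector-based choice of destination clusters
        "clusterSelector": {"matchLabels": {"region": "us-east"}},
    },
}

# A MultiClusterIngress: same shape as a single-cluster Ingress, except each
# path backend names a MultiClusterService instead of a Service.
multi_cluster_ingress = {
    "kind": "MultiClusterIngress",
    "metadata": {"name": "zone-ingress"},
    "spec": {
        "rules": [{
            "http": {"paths": [
                {"path": "/foo", "backend": {"multiClusterService": "foo-svc"}},
                {"path": "/bar", "backend": {"multiClusterService": "bar-svc"}},
            ]},
        }],
    },
}

def clusters_for(mcs, clusters):
    """Return the clusters whose labels match the MultiClusterService selector."""
    wanted = mcs["spec"]["clusterSelector"]["matchLabels"].items()
    return [c for c in clusters if wanted <= c["labels"].items()]

clusters = [
    {"name": "a", "labels": {"region": "us-east"}},
    {"name": "b", "labels": {"region": "us-east"}},
    {"name": "c", "labels": {"region": "eu-west"}},
]
print([c["name"] for c in clusters_for(multi_cluster_service, clusters)])  # ['a', 'b']
```

Because each MultiClusterService carries its own selector, /foo and /bar in the same MultiClusterIngress can resolve to different sets of clusters, as mentioned in the discussion.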
Speaker A: I just wanted to extend what you were saying. We have similar APIs relevant to networking: we have a ServiceDNS and a multi-cluster IngressDNS API in Federation v2 right now, implemented, along with the controllers. So it might make sense for you to at least have a look at that, and as Quentin mentioned, maybe not necessarily as part of this meeting but somewhere offline we can sync up and figure out where we differ, or whether it makes sense to collaborate.

Speaker B: Sure.
Speaker B: Yeah, and since you are really familiar with that API, it would be great if you could take a look here as well, and we can figure out where we are: does it still look very similar, with only simple cosmetic changes, or does it look fundamentally very different? I doubt that, but if you feel it's very different, that's what I'd recommend.

Speaker A: Yes, we should.
Speaker A: Setting aside the API, or the fields in it, the mechanism we follow in Federation v2 is that federated resources are created specifying the intent the user wants. So, for example, for Ingress: a simple mechanism for propagating ingresses would be that whatever Ingress is created in the Federation control plane, the same specification is replicated into the other clusters. The additional point is that all the Ingress objects will get some IPs, or some rules, associated with them per cluster, so the user can express the intent of collecting all these resources and then programming some external DNS server, or some other external entity, with this collected information. So that's how we are trying to do it.
Speaker A: It's very similar for services too, and services, in fact, are very similar to what they were even in Federation v1. A user creates a service, the services are propagated, and then some information from the service in each of the clusters, namely the IP against the DNS name, is collected in the Federation control plane. The Federation control plane then hands over the responsibility to an external entity to update the DNS entries in some global DNS server. So that is how we are trying to do it in Federation v2.
Speaker A: In our case, we have also been thinking about trying to figure out a solution where, for example, you program an ingress in a single cluster, using paths to differentiate the different services or different entities which might be served from that ingress, and map it somehow to some similar DNS-based mechanism. But we haven't been successful so far. So right now it's a simple collection of whatever, as you mentioned, and then updating the DNS with it, yeah.
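The collect-and-program flow described above can be sketched roughly as follows. The function names and record shapes are illustrative assumptions, not the real Federation v2 types: the idea is only that per-cluster ingress IPs are gathered in the control plane and handed to an external DNS updater.

```python
def collect_endpoints(clusters, name):
    """Gather the external IP each member cluster reports for a given ingress name."""
    ips = []
    for cluster in clusters:
        status = cluster["ingresses"].get(name, {})
        if "external_ip" in status:
            ips.append(status["external_ip"])
    return ips

def to_dns_records(hostname, ips):
    """Turn collected IPs into simple A-record tuples for an external DNS updater."""
    return [(hostname, "A", ip) for ip in sorted(ips)]

clusters = [
    {"name": "us", "ingresses": {"shop": {"external_ip": "35.0.0.1"}}},
    {"name": "eu", "ingresses": {"shop": {"external_ip": "34.0.0.2"}}},
    {"name": "dev", "ingresses": {}},  # ingress not present here: skipped
]

records = to_dns_records("shop.example.com", collect_endpoints(clusters, "shop"))
print(records)  # [('shop.example.com', 'A', '34.0.0.2'), ('shop.example.com', 'A', '35.0.0.1')]
```

An external entity (a global DNS server updater, as the speaker puts it) would then consume `records`; the control plane itself never programs DNS directly.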
Speaker A: On the integration, I don't have first-hand information; he is the person who implemented it, and I don't think he's on the call right now. But what he has tried out, his implementation, is with CoreDNS as an external server and with some OpenStack load balancers, from what I remember of his implementation.
Speaker A: Yeah, I understand. There is also one more hard part, which is that individual clouds provide load balancers with different capabilities, so it's difficult to have a unified thing. Having said that, I'm sure we can try to come up with some common solution across at least some platforms, if not all.
Speaker C: Yeah, I was going to suggest that I think it would be super useful, because as far as I understand, the delta between what is currently in Federation and what this would provide in the future, namely Google Cloud load balancer integration, is relatively small, modulo all of the cross-cloud complications: you know, if you have clusters in different cloud providers, they can't all sit behind the Google Cloud load balancer. I think it would be useful for us to just build that anyway.
Speaker B: This was honestly exploratory brainstorming: we just wanted to come up with something that would work and get feedback on it. So this is the model we came up with, and we wanted to see what the community feedback on it is. Like I said, the model is a bit different from what we were doing in Federation v1: we don't sync the MultiClusterIngress to the target clusters; the syncing is done for the service, based on cluster selection, okay.
Speaker B: That is correct. One of the primary differences is that users don't have to create services in all the clusters. That's a pain point we had with the kubemci CLI and in Federation v1: users had to ensure a service exists with the same NodePort in all the clusters, and that's the feedback we've got, which is obviously a pain for them. So this API model basically alleviates that; that's the primary thing, okay.
Speaker B: It automatically creates the services in the target clusters based on the MultiClusterService, and then the MultiClusterIngress points to the MultiClusterService. So users create a MultiClusterService at the multi-cluster level, and then the controller syncs those services into the target clusters. So users don't have to create the services themselves; that's exactly it.
Speaker B: In v1 Federation, sorry, we did have this service concept, which did this global, cross-cluster service discovery. So yes, it did sync services. But if you just wanted to use multi-cluster ingress without that cross-cluster service discovery, without having to expose your service outside to the other clusters and do DNS routing, in that case users had to create those services themselves. And even for services in v1 Federation, when users wanted a service to be created in all clusters, there was this problem that they would run into.
Speaker B: They would run into a NodePort conflict: the same NodePort could already be taken by someone. So in this case, I would point out that GCLB has this new feature called network endpoint groups (NEGs), and this uses that, which does direct load balancing to pods. So you don't need NodePorts now. So this is one new thing: with this API, you can take advantage of that.
Speaker C: Okay, I actually thought in v1, when you define a load-balanced service, you get a load balancer and DNS and all of that other stuff; but if you just didn't specify that, you didn't get all of the other stuff, and if you didn't specify a node port, you got an automatically allocated node port. In which case I think it is equivalent to a multi-cluster service, if I'm not mistaken. So...
Speaker B: There was the DNS routing, and yes, there was the automatic node port, but we did see customers run into a lot of issues where that port was already taken. And then we discussed these design questions: what does the controller do when it restarts, or if you add a new cluster? Say first you had three clusters and the Federation service controller was able to find a port that exists in all three clusters, but then you add a new cluster and that port doesn't exist there. So then it runs into a conflict.
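The NodePort conflict being described can be illustrated with a tiny sketch. The port-range values and data shapes here are made up for illustration; the point is only that the controller must find one port free in every cluster, and adding a cluster where that port is taken breaks the invariant.

```python
def common_free_port(clusters, port_range=range(30000, 30010)):
    """Pick the first port free in all clusters, or None if no such port exists."""
    for port in port_range:
        if all(port not in c["used_ports"] for c in clusters):
            return port
    return None

clusters = [
    {"name": "a", "used_ports": {30000}},
    {"name": "b", "used_ports": {30001}},
    {"name": "c", "used_ports": set()},
]
port = common_free_port(clusters)
print(port)  # 30002

# Now add a new cluster where that very port is already taken: the previously
# chosen port is no longer free across the whole fleet.
clusters.append({"name": "d", "used_ports": {port}})
print(all(port not in c["used_ports"] for c in clusters))  # False
```

Direct load balancing to pods via NEGs, mentioned just below, sidesteps this entirely because no fleet-wide port needs to be reserved.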
Speaker A: I was saying, and I might be slightly wrong on this, but I was just having a look at the v2 API for multi-cluster ingress. What it does is basically just collect all the ingress IPs from all the clusters. So you create a service in each cluster, you have this cross-cluster service discovery mechanism, and then, additionally, you create a multi-cluster ingress DNS resource.
Speaker A: We call it the ingress DNS; MultiClusterIngressDNS is the name of the API. It says the intent is: for a given ingress name, which will be common across all of the clusters, just collect the ingress endpoints. Each cluster will have an ingress IP, or the external IP through which a user can reach this ingress, and DNS records are created for that external IP. There is also the case where you don't want to have external endpoints or load-balancer-type services.
Speaker A: So that's what I was trying to say: you can have a service defined as a cluster-IP-only service, which does not have an external load balancer type. Then you can create an Ingress alongside it, and the ingress rule can be such that it redirects the traffic to this particular service, and the Ingress can have an external IP, right.
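The pattern just described, a ClusterIP-only Service fronted by an Ingress that holds the only external IP, can be sketched as follows. The backend fields follow the core Kubernetes Ingress shape, but the names and addresses are illustrative.

```python
service = {
    "kind": "Service",
    "metadata": {"name": "internal-svc"},
    "spec": {
        "type": "ClusterIP",  # no external endpoint of its own
        "selector": {"app": "web"},
        "ports": [{"port": 80}],
    },
}

ingress = {
    "kind": "Ingress",
    "metadata": {"name": "web-ingress"},
    "spec": {
        "rules": [{
            "http": {"paths": [{
                "path": "/",
                "backend": {"service": {"name": "internal-svc",
                                        "port": {"number": 80}}},
            }]},
        }],
    },
    # Filled in by the ingress controller: the one externally reachable address.
    "status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.10"}]}},
}

# The externally visible address lives on the Ingress, not on the Service.
external_ips = [e["ip"] for e in ingress["status"]["loadBalancer"]["ingress"]]
print(external_ips)  # ['203.0.113.10']
```

A DNS-collection controller of the kind discussed earlier would then publish the Ingress status IP, since the Service itself exposes nothing outside the cluster.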
Speaker A: Would it make sense, Nikhil, to at least... I mean, we can set some time aside. As I said, I did not implement any of this, so I might be mistaken on some things, and I would not have a very deep understanding of each of these APIs as implemented in Federation v2, but he has that, so I will ensure that he attends the next meeting so that all the details will be set out. I wasn't sure.
Speaker C: I think it'd be really useful if we could unify these in such a way that we have at least a single idea, even if we have different implementations of that API: a single API that addresses the use cases that Christian raised, which I think are perfectly reasonable. You know: don't use cluster IPs, don't use external DNS, route ingress directly to the pods in the cluster, which is essentially my understanding of the requirement. And it doesn't sound like the delta between that and what we have is very large.