From YouTube: Kubernetes SIG Multicluster 2020 Apr 28
B
Give me just a sec here. I've been working on some prototyping, and I figured it'd be cool to start with a demo of what it would actually look like if we implemented the KEP, with a very hacky implementation of the KEP. But I'm having some technical difficulties; I need to join from another computer. Okay.
A
While you do that: since we last talked about the multicluster services KEP, I realized there needed to be some maintenance done on the list of SIG Multicluster leads, and I thought about what the next steps on the KEP would be. I think where I'm at at this point is that we can probably commit it in a provisional state and just keep it there while we prototype.
B
Okay, awesome. Okay, so I'm trying to share my screen. It says "host disabled attendees screen sharing." Okay.
B
Okay, so here's my very blurry command line. I figured I'd just go through it one by one.
B
Sound
better
at
all,
yeah,
okay
cool,
so
so
I'm
just
gonna
run
through
the
scripted
demo,
but
it's
all
live
again,
stack
on
top
kind
of
cluster
watch,
your
TV
client
clusters,
so
first
I'm
provisioning,
networking
across
both
clusters,
thanks
to
a
tip
from
Antonio,
yeah
and
now
just
kind
of
deploying.
This
is
very
similar
to
a
kind
of
demo.
The
concept
I
did
earlier:
I'm
just
deployed
deployed
a
service
in
each
cluster.
B
The service basically just has a simple server that responds with the IP address of the pod and a tag; so this one has "cluster-a", and there's another one with "cluster-b". But the new addition is the ServiceExport, which is just that name-only marker resource we talked about a couple of weeks ago. Other than that, this is just a normal deployment. So we'll keep going, and there's also a pinger in each cluster that basically just logs the requests it makes to the service and the responses.
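For reference, the export side described here, a normal Service plus the marker resource, might look something like the following. The API group and version are my guess at the draft KEP's naming at the time and may differ from what actually shipped:

```yaml
# A plain Service in each cluster...
apiVersion: v1
kind: Service
metadata:
  name: serve
  namespace: demo
spec:
  selector:
    app: serve
  ports:
  - name: http
    port: 80
---
# ...plus the name-only marker resource that opts it in to export.
# Group and version here are illustrative.
apiVersion: multicluster.k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve
  namespace: demo
```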
B
So in this case cluster A is talking to cluster A, cluster B is talking to cluster B, and we can see that we've got regular endpoints, no change. But now I'm exporting the endpoints from each cluster and modifying them just a little bit: to these exported endpoints I've added a multicluster.kubernetes.io/service-name label, which looks a lot like how EndpointSlices work today with the kubernetes.io/service-name label. This is just the multicluster analog.
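An exported slice as described, that is, a copy of an in-cluster EndpointSlice retagged with the multicluster label, might look roughly like this. The label key is from the demo; everything else is illustrative:

```yaml
apiVersion: discovery.k8s.io/v1beta1   # EndpointSlice was pre-GA at the time
kind: EndpointSlice
metadata:
  name: serve-cluster-a
  namespace: demo
  labels:
    # Multicluster analog of kubernetes.io/service-name:
    multicluster.kubernetes.io/service-name: serve
addressType: IPv4
ports:
- name: http
  port: 80
endpoints:
- addresses:
  - 10.0.0.7
```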
B
So now we can see here there's the original and then the two imported ones: the original is for the local service, the others are for the imported service. But now we're going to actually create the import. So, with this shell script as the implementation of the controller, I'm actually creating all of the resources necessary for the import, and just applying that ServiceImport.
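The derived import created by the shell-script controller might be sketched as follows; the field names are my approximation of the draft KEP and may not match the final API:

```yaml
# Hypothetical ServiceImport, created in each consuming cluster by the
# controller alongside the exported EndpointSlices.
apiVersion: multicluster.k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
  namespace: demo
spec:
  type: SuperclusterIP     # analogous to ClusterIP, but supercluster-wide
  ports:
  - name: http
    protocol: TCP
    port: 80
```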
B
So that's the simplicity of the consumption experience that I want to shoot for. This is implemented with kube-proxy, actually updated to understand these resources, which was a good exercise for understanding what it would actually take to properly do this with kube-proxy; it's definitely doable, but significantly more work. So, any questions about that? And then there are some topics I figured we could discuss today on the KEP.
B
Yeah, so I think there are a few things. One is defining DNS, but that doesn't seem like it's going to be that complicated, because I imagine we just want DNS to look like it does today for in-cluster services. I think the bigger thing is that we want to actually define some things around merging.
B
This
is
what
I
really
want
to
talk
about
today,
but
I
really
wanted
to
find
some
some
characteristics
about
how
those
cross
cluster
services
are
merged,
because
I
think
that
kind
of
dictates
how
we
go
about
what
we
do
next,
but
I
think
the
real
next
step
would
be
to
try
making
a
a
solid
queue
proxy
based
implementation.
So
this
can
actually
work
and
then
figure
out
the
the
testing
strategy
for
how
we
you
know
what.
Basically,
what
would
it
take
for
us
to
consider
this
implementable?
B
So, basically, kube-proxy turns all of its services into internal ServicePort structs and turns endpoints into its internal structure. What I've done is add ServiceImport as a first-class API construct; kube-proxy watches it just like Services, and it also knows how to associate the multicluster.kubernetes.io/service-name label with the imported service, the same way it associates the standard service-name label with regular in-cluster services.
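The association step described here can be roughly sketched as follows. This is a hypothetical, dependency-free Python sketch rather than kube-proxy's actual Go code; only the two label keys come from the prototype:

```python
# Hypothetical sketch of the association step described above. kube-proxy
# groups EndpointSlices by the service they belong to: in-cluster slices
# carry the "kubernetes.io/service-name" label, while the prototype's
# exported slices carry the multicluster analog and resolve to a
# ServiceImport instead. Everything but the two label keys is invented
# for illustration.

SERVICE_LABEL = "kubernetes.io/service-name"
IMPORT_LABEL = "multicluster.kubernetes.io/service-name"

def associate(slices):
    """Split slices into per-Service and per-ServiceImport endpoint maps."""
    services, imports = {}, {}
    for s in slices:
        labels = s.get("labels", {})
        if IMPORT_LABEL in labels:
            imports.setdefault(labels[IMPORT_LABEL], []).extend(s["endpoints"])
        elif SERVICE_LABEL in labels:
            services.setdefault(labels[SERVICE_LABEL], []).extend(s["endpoints"])
    return services, imports
```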
B
I had a custom build with it in the core API. So that's a question: how do we go about that? Part of the reasoning, and I think it's the same reason EndpointSlice went that route as well, is that it performs better than CRDs. For kube-proxy to watch things, the CRDs would need to be created before kube-proxy starts, and it's one of those components where that can be tricky. So I've been thinking about other models for this. To me it seems like this should be a CRD, but in order for that to work with kube-proxy, it kind of seems like we're going to need some kind of live plug-in model, so that we can start a sidecar or something after kube-proxy starts. Yeah.
A
I don't want it to mean that if you do something that touches kube-proxy, it has to go into the core Kubernetes API. And certainly the API type suggests a CRD; it doesn't need to be anything but a CRD. I have no idea what degree of pluggability there is within kube-proxy, or what may be planned.
B
So I've been looking at that, and currently I don't believe there's any, really. But all the right things are there to edit: there's a layer within kube-proxy where, if you can create a ServicePort and a set of endpoint structs, it will just do its thing across all of the different implementations.
B
So I guess the question is: today it looks like you kind of need to plumb things through as those structures if you want them to work with kube-proxy, but we could fix that, and then this could be a CRD. It would be great if it wasn't compiled in. Mm-hmm.
F
No objections to making kube-proxy friendlier to these sorts of extensions, as long as they're not ridiculously, horribly invasive on kube-proxy. I have to wear both hats here. In general, kube-proxy could do with some cleanup work anyway; in fact, I was thinking about spending part of an intern's time on some of that.
B
Awesome, yeah. And to its credit, I've got to say, actually popping this in was not that difficult, so that gives me hope that whatever we propose here to make it more extensible is not going to be that painful. Take that with a grain of salt, but actually making this work was not too difficult at all.
G
You don't actually have to, though. I mean, for the imported ones you do, but the exported ones are reliably named, and there's a hash function they use based on the service name, so you can program the rule to point to the existing service that you have for the local cluster, and only program the imported rule, the one for the endpoint slices.
G
The endpoint slices end up getting programmed into a separate chain, and that chain could be reused. I've done this previously, where the chain naming is reliable, so that if you wanted to program, say, your supercluster rule, you would program a chain for the imported set of endpoints. Oh yeah.
B
On the Kubernetes side, you'd need to watch all the slices, so there would be a lot of processing there; I think we would probably end up using twice as much memory, for both kube-proxies. But on the iptables side, yeah, I think implementation-wise it would work. The biggest issue is probably contention, if we've got two processes trying to update iptables at the same time.
F
The biggest problem, and I'd be surprised if it isn't, is probably going to be memory. iptables has to load the entire rule set, parse it into native structures, insert what it's going to insert, then flush it back down into the kernel. So for a very large rule set, that can be tens or scores of megabytes of memory, which you have to provision for in your DaemonSets, which becomes really obnoxious, or you OOM your system. Ask me how I know, right?
G
But what I'm suggesting wouldn't duplicate any chains. You would be using the chains that are programmed by kube-proxy and then adding supplemental chains for the imported services. So there would be no duplicated rules; I guess that's my point. And as far as watching the endpoint slices: you're taking the imported endpoint slices and creating those in the cluster that you're doing the import into, right? Right, that's right.
G
Just an idea. I think you'd be able to have a separate process, so you're not stuck with whatever the timeline is for kube-proxy to implement this stuff. I think you could actually do a second DaemonSet, limit it to watching your endpoint slices for what's been imported, and then reuse the chain that they have for the in-cluster service.
F
Yeah, I mean, we've talked about this a little bit. I'm super down on having multiple things trying to coordinate with iptables, just because I field all of the iptables bugs. But for lower scale it's probably much less of an issue than for higher scale, right? So I may be over-biasing my thinking for some users based on the experiences of others.
A
I think it might be easier to justify a need for the kube-proxy KEP if we had something out-of-band and could point to it and say, you know, at X scale it gets silly to do it this way.
F
We had some anecdata from the ip-masq-agent experience, where it was writing a very small number of iptables rules. But the number of rules you're writing doesn't really matter; the number of rules you're reading does, and at the order of thousands of rules it started to consume what felt like nonlinear memory. But that was anecdata, not rigorously gathered.
B
Okay, so basically we need a good test case for this either way, I guess. One benefit, the obvious benefit, to building into kube-proxy is that presumably it would scale as well for cross-cluster services as it does for in-cluster services today, without adding a new bottleneck. But I absolutely agree with that point: we'd have an easier time proposing an extension mechanism if we have a real use case to point to.
B
So here, let me share a few things. Going back to the actual implementation: we touched on a few things last time about having services in each cluster exported and merged into the supercluster service, but I think there are some open questions about the actual mechanics of how that works, how it should work, and what happens when two exported services disagree.
B
All right. So, when merging these services across clusters, the main goals I want to highlight here are: retaining all of the existing service capabilities that actually matter, the ones people would expect to use in a multicluster environment; and having some intuitive conflict resolution. I'll just say up front, I don't think it's going to be possible to automatically merge any two services with disagreeing specs, but it should be very easy to understand why anything failed.
B
We
try
our
best
to
remain
functional,
so
that
said,
we
try
to
do
the
automatic
right
thing
and
intuitive
way
whenever
we
can
and
then
we
want
to
be
able
to
support
incremental
role
in
there's
not
going
to
be
any
nice
way.
You
know,
even
if
we
have
a
good
orchestration
over
this,
there's
not
going
to
be
any
nice
way
to
do
atomic
updates
across
multiple
clusters,
so
you
should
be
able
to
change
characteristics
of
a
service
in
cluster
a
without
breaking
an
export
from
cluster
B,
if
only
temporarily.
Well,
you
update
everything.
B
Failure
States
handled
good
question
so
I
think
that's
something
we
should
we
should
talk
about,
but
first
I
want
to
talk
about
what
the
what
could
cause
it
or
actually
maybe
we
can
dig
into
that
now.
So.
B
So
I
guess
yeah.
Let's,
let's
talk
about
the
things
that
we
probably
don't
care
about
first,
so
we
can
kind
of
agree
because
I
think
the
answer
will
depend
on
what
caused
the
failure.
So
I'm
I,
don't
think
any
of
these
immediately
matter
like
these
will
be
kind
of
ignored
properties
on
the
service.
That's
exported,
don't
think
they
necessarily
impact
this
ever
baited
service
at
all.
B
So
cluster
epi
and
the
kernel
Epis
externally,
simple
traffic,
health
check,
load,
balancer
any
of
the
load,
balancer
config
and
selector
are
definitely
local
servus.
We
only
care
about
the
endpoints
of
a
cluster
IP
service,
so
those
don't
really
play
in
and
then
published,
not
ready
addresses.
We
probably
do
want
to
figure
out,
but
not
until
we
figure
out
how
we're
going
to
support
headless
multi
classical
headless
services,
so
it
doesn't
make
sense
for
a
for
a
super
cluster
IP
type,
any
disagreement
there.
C
Just maybe a quick question on the health check: are you actually performing a health check on the imported service? How do you determine when one of the remote endpoints provided by the endpoint slice has become unhealthy and should no longer receive traffic from consumers?

B
Good point, good question.
Right, or, I mean, you could get ClusterIP, or anything that ends up with a cluster IP; so a LoadBalancer or NodePort service would be treated as ClusterIP. Okay. I think the big thing is that ExternalName doesn't make sense; I don't know what that would look like when merged with another service in a supercluster. But yeah, we just care about the ClusterIP part of a service and its endpoints. So then let's talk about the properties.
B
There would basically be a cluster spec for each cluster with some implementation-specific unique identifier, so whatever you're using as your cluster registry would use this identifier, and this would be the same identifier that we use as the source-cluster label on the endpoint slice. And so the things that we do care about are:
B
Alright
I
be
family.
Of
course
we
can't
really
merge
across
IP
families,
ports,
service,
ports,
session
affinity
and
topology
keys.
So
I
think
it
would
be
probably
helpful
to
talk
about
how
you
know.
I
threw
down
some
ideas
of
how
these
should
work,
but
it
should
I
think
we
should
just
talk
through
what
these
actually
could.
Look
like
so
IP
family
seems
fairly
straightforward
to
me
like
if
you
can't
merge,
ipv4
and
ipv6
service,
because
IP
tables
can't
rub
between
the
two.
B
Cool. So, service ports. The thinking there was that you may want to change service ports, so we could just export the union of ports from all exporting clusters, and that union would be what's imported. The problem there is if two clusters declare a service port with the same name but different ports; I'm not sure how I would reconcile that, so I think the right thing to do would probably be to fail. But this would be one of those hard failures where we wouldn't know what to do.
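The union-with-hard-failure rule being proposed can be sketched as follows; this is hypothetical Python for illustration, not anything from the KEP itself:

```python
# Sketch of the port-merging rule under discussion: take the union of
# named service ports across all exporting clusters, but fail hard when
# two clusters declare the same port name with different numbers, since
# there is no priority to break the tie. Illustrative only; the actual
# conflict policy was still an open question.

def merge_ports(per_cluster_ports):
    """per_cluster_ports: one {port_name: port_number} dict per cluster."""
    merged = {}
    for ports in per_cluster_ports:
        for name, number in ports.items():
            if name in merged and merged[name] != number:
                raise ValueError(
                    f"conflicting definitions for port {name!r}: "
                    f"{merged[name]} vs {number}")
            merged[name] = number
    return merged
```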
B
We
would
have
to
report
an
error
on
the
export
and
then
we
need
to
decide
how
that's
handled
I.
Think
the
safest
thing
is
to
probably
stop
performing
any
action.
So
no
updates
I,
don't
think
we
should
delete
a
service
because
it
was
or
I,
don't
necessarily
think
we
should
delete
a
service
because
we
no
longer
know
what
the
right
thing
to
do
is.
But
at
the
same
time
we
should
assume
that
those
endpoints
will
become
stale
very
quickly.
So.
B
Right, so, yeah, the initial thinking is that all ServiceExports involved in the conflict would get a condition set, with some kind of description of what's conflicting, and, I think, maybe a rollup of the conflicting property across clusters. I don't know; we don't know which one is the problem cluster, because there's no priority.
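Such a condition might be surfaced on each involved ServiceExport roughly like this; the condition type, reason, and message are purely illustrative:

```yaml
status:
  conditions:
  - type: Conflict
    status: "True"
    reason: ConflictingPorts
    message: >-
      port "http" is declared with different numbers by two exporting
      clusters; the export is paused until the specs agree
```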
A
Yeah,
it's
it's
definitely
worth
thinking
about
what
what
kind
of
trouble
you
can
get
yourself
in
by
rolling
up
the
property
across
all
clusters.
Right,
like
you,
could
it's
easy
to
imagine
a
scenario
where
you
could
run
out
of
space
a
net
CD
to
actually
put
that
information
in
there
at
some
amount
of
scale
it?
B
Cool, I will do that, yeah; I'll stop updating the initial PR today and start a new one. Cool. And session affinity, that one's kind of tricky. There's no reason we couldn't plumb that through; it's something that's actually expressed on a per-endpoint basis when it's programmed in iptables, and I'm pretty sure it's handled similarly in IPVS. But it's tricky because we could have different session affinities for different endpoints; there's the flypaper effect.
F
Yep, Jeremy, we've talked about this at length; we never really came up with what we thought was a great answer. The thing that still nags at my mind is that, with EndpointSlice not being GA yet, we've talked about making changes to EndpointSlice to try to minimize its footprint, so that it integrates better with managed services and things like that.
B
Yeah,
maybe
maybe
this
is
something
that
that
could
be
moved
up
into
endpoint
slice
or
doing
they're
in
place.
Lets
everyone
look
at
it,
so
you
know
we're
just
doing
whatever
input
slice
thinks
it
needs
to
do,
but
yeah
that
that's
probably
a
conversation
that
needs
to
happen
with
sig
network,
as
well
as
with
the
next
one,
topology
keys,
so
service,
topologies
kind
of
in
a
little
bit
of
flux.
Right
now,
I
understand
Tim.
F
This is something we have some very rudimentary research into doing more automagically. So for this point, I think you can choose to either support the alpha API or just ignore it and tell people not to run superclusters that span high-latency edges of the graph, because we're not smart about that yet. Yeah.
B
I
guess
maybe
a
better
way
to
put
this
would
be
that
on
whenever
we
end
up
with
with
topology,
it
seems
like
the
an
endpoint
should
respect
the
same
topology
rules,
no
matter
where
it
is
so
whenever
whatever
rules
apply
to
it,
an
exporting
cluster,
that's
where
the
surface
owner
is
configuring
it.
It
should
probably
be
imported
to
the
same
rules.
B
Well, I think it's probably a good time to wrap up here. So we touched on CRD versus built-in type; a CRD would probably be better if we can make it work, but we need an extension mechanism for kube-proxy, or at least to build our own external kube-proxy.
A
I
I'm,
starting
to
think
that
might
be
a
long,
I,
initially
thought
I
think
I.
Think
in
order
for
that
to
happen,
we
have
to
have
what
we
think
is
a
tractable
way
that
we
could
efficiently
implement
this.
That
would
lead
us
to
a
path
where
we
could.
We
could
eventually
call
it
GA
right,
like
or
v1
supported,
I
feel,
like
the
problem
in
front
of
us
is:
how
do
we
prototype
it?
A
So
it
suggests
like
a
journey
between
the
next
step
and
the
end
of
that
story,
for,
for
my
own
gut
I
feel
like
a
way
to
prototype
it
to
demonstrate,
demonstrate
the
core
value
and
like
have
something
that
works
at
low
scale
is
probably
really
important
to
getting
enough
like
critical
mass
in
the
community
to
get
us
to
a
point
where
we
can
have
buy-in
from
the
stakeholders.
That
would
be
required
to
do
things
like
have
a
plug-in
model
for
the
cube
proxy.
So
that's
probably
the
next
step.
Yeah.
F
It's fair to say that there will be a plan; the question is exactly how automatic it will be versus how manual, and where that data will be carried. Those are the interesting bits. Basically, the conversation comes down to: is there a controller that's going to consume the bits and tell you "just use these ones, trust me," or is it going to be like what we're doing here, pushing all the information down to each of the clients and letting the clients make the decision?