From YouTube: Kubernetes SIG Multicluster 2021 Mar 16
B
All right, we'll just give it a minute for folks to trickle in. I still haven't been able to access any trivia in my brain, and I always forget to bring a little piece of trivia, but that's okay. We can talk about the weather. It's rainy here; it's got that rainy-day Stardew feeling, so I don't have to water my crops. Nice.
B
Yeah, definitely. All right, I've got three after, so welcome, everybody, to the March 16th, 2021 meeting of Kubernetes SIG Multicluster. Laura, you are up first with multi-cluster DNS, if I remember the order of the agenda correctly.

A
Yes.
A
Take it away? We'll take it away. Here we go; let's do our whole desktop. All right, cool. So, basically, last week, for people who were there, I briefly brought up that I want to get talking about multi-cluster DNS. I've been working on a doc for it, and I want to move that along some more and also just kind of bring everybody on board on what we're talking about. I made some little 101 slides, I guess, about multi-cluster DNS, so I thought I would go through these.
A
I don't know if this is all old news to people, but hopefully it explains some stuff to folks either way, and, as mentioned before, it was a good way to organize my thoughts and feelings.
A
So, on that, I'm just going to go through a couple of slides here and describe the why and what of this multi-cluster DNS proposal, and then, in general, my call to action is going to be: I'd like people to comment on the doc and be thinking about it, so that we can move this along.
A
So the first point here is: why am I even talking about this? Mainly, this is a beta graduation blocker for the MCS API, so we need a detailed DNS spec for multi-cluster services. That's what I'm trying to achieve here, and at least one MCS DNS implementation is also listed. So, logistically, that's why, and also the current implementations right now are extending cluster-local DNS in sort of a generally agreed-upon way.
A
I wasn't around, so I don't know how everybody talked about it before, but just briefly looking at the docs for Submariner and GKE, there are some commonalities there, and it's also kind of extending from what's already in the cluster-local DNS in what I think is a sane way. But we don't have a central standard yet for any other implementations coming down the pipe, or any future extensions.
A
We don't have the written document to send people to, so we kind of need that to get everybody on the same page, and the current implementations might not yet have full parity with the current cluster-local DNS specification. Right now there's this specification.md file in the DNS repo for Kubernetes, and there are all these rules and sections about which records need to exist, and we don't necessarily have all of those in the current implementations.
A
Those SRV records and PTR records in the specification for cluster-local DNS enable some other DNS-related functionality that we probably also want to have for multi-cluster services. So that's why we're talking about it. Basically, for my brain, this is organized in terms of which DNS records we need to support, and then, before I go through this, the next slide is which service types we need to support.
A
That's where all of this proposal is coming from. Maybe everybody already knows this, but I was kind of new to all of it. The DNS records that we need to support, to get parity with the current existing specification, are the A/AAAA records, SRV records, and PTR records. The A and AAAA records map one DNS name to one IP address, which is kind of the most normal case.
A
An example would be myservice.test.svc.clusterset.local matching to an IP address. But then these other ones, which I don't think necessarily exist anywhere in any MCS implementations yet, are the SRV records and the PTR records. The SRV records look like this: they have a weight and priority value, this metadata encoded into the response, and they're also in a specific format that takes protocol and port information into account to get these records back.
A
So this can be used by clients to load-balance for certain ports and protocols; that's sort of the DNS feature we get from this. And then the pointer (PTR) records are a specifically formatted DNS name, this IP-address-looking thing under in-addr.arpa, which should match up with the human-readable DNS name on the other side, and it's used by DNS clients for reverse DNS lookup. So that's why these two things are something we should do, not just to have parity.
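To make those three record shapes concrete, here's a minimal Python sketch (illustrative only, not from the talk); the service name, namespace, port, and IP are placeholder values, and the zone follows the conventional clusterset.local suffix:

```python
# Hedged sketch of the three DNS name shapes discussed above.
# "myservice"/"test" and the IP address are invented examples.
import ipaddress

CLUSTERSET_ZONE = "clusterset.local"

def a_record_name(service: str, namespace: str) -> str:
    # A/AAAA lookup name: one name resolving to the service's IP(s).
    return f"{service}.{namespace}.svc.{CLUSTERSET_ZONE}"

def srv_record_name(port: str, protocol: str, service: str, namespace: str) -> str:
    # SRV lookup name: _<port>._<protocol>.<service>.<ns>.svc.<zone>.
    # The SRV answer itself carries priority, weight, and port metadata.
    return f"_{port}._{protocol}.{a_record_name(service, namespace)}"

def ptr_record_name(ip: str) -> str:
    # PTR lookup name for reverse DNS: reversed octets under in-addr.arpa.
    return ipaddress.ip_address(ip).reverse_pointer

print(a_record_name("myservice", "test"))
# myservice.test.svc.clusterset.local
print(srv_record_name("https", "tcp", "myservice", "test"))
# _https._tcp.myservice.test.svc.clusterset.local
print(ptr_record_name("10.42.0.1"))
# 1.0.42.10.in-addr.arpa
```

The SRV shape is what lets a client ask "which port serves https over tcp for this service," and the PTR shape is what a reverse lookup of the IP would query.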
D
Good morning. A small correction on the A records: for headless services, it's one name to N IP addresses.

A
True.

D
Right. The easy one is PTR records: there can only be one PTR record for an IP address. That's the rule.
D
Well, actually, you can install more than one, but the DNS spec says don't, so I don't think we can handle PTR for this. Honestly, handling PTR at the service level was a mistake, and we should not repeat that; but even if we wanted to, we can't. So I think you can strike PTR from the plan entirely, unless somebody has a really good reason why we want to jump through hoops to make the reverse lookup of an IP address turn into the multi-cluster name instead of the local name. Given how we're implementing, with thunking through real services, you're already getting a PTR record for that IP address.
D
I know some people use them; I wonder how important it is in this case. Maybe it's a place where we need to actually reach back out to SIG Network and see how widely they're being used, whether we want to carry them forward, and whether there are any changes. I'm not super familiar with SRV records myself.
D
I mean, one of the core principles of doing networking the way Kubernetes did networking was so that clients could do that, because that's what clients wanted to do. They want to assume that HTTPS (this is highlighting, right?) is 443, and if it's not 443, you should probably know that. And so no reasonable client that I know of is going to go do a lookup for _https._tcp.whatever, then pull the port, and then go look that up; they're just saying HTTPS means 443.
D
I think it's an attractive nuisance. You look at it and you're like, "Oh, that's interesting, let me use that," and then it turns out most DNS clients don't have any real support for SRV records.
D
Now, I do believe it's being used in cases like VoIP and SIP, where they use the port, but mostly they want to use the weights, and SIG Network has some open issues to actually allow programmable weights, which is something that SRV offers that regular records don't.
C
Along the lines of that question, though: if we're not going to do SRV records for now, a nice little blurb on why, explicitly stating that we're not doing this because "if HTTPS is not 443, you should know that," should be something that I think we probably write down.
F
Yeah, sorry. I'm just wondering, tying it into the topic I have on the agenda as well, how useful SRV records would be for weighted load balancing, or if that's not really using them for what they're intended for. Because the way I remember it, this is more about auto-discovery, so they tend not to be seen much in enterprisey network situations, apart from VoIP.
F
On home networks you get them for printers, and you see them quite a bit for SSH on some network equipment, stuff like that. But it seems to me they could be useful just for the weighted balancing aspect of multi-cluster services.
D
I still don't know how useful it is, because traffic doesn't generally come from that IP address, and so doing reverse lookup on incoming traffic isn't a useful concept. I think the thing to say here is that IP addresses should have a PTR record. If we are using the phony in-cluster service to front-end the EndpointSlices for a multi-cluster service, then it already has a PTR record.
D
If your implementation wants to use a different IP address, like you want to manage a central pool just for multi-cluster services, which I think is a valid implementation, right, then you probably want a reverse lookup for those IP addresses.
A
Yes. I think you're saying that in the implementations to date that use this dummy service, we already get this for free, but there should probably be a note that one should exist for every IP address, and that may catch any implementations where they're not using a dummy service, like you mentioned: some central pool of IPs somewhere else that they're pulling from to get the supercluster IP, or clusterset IP.
C
And I think, as an extension of that, too: if you are using the dummy setup that we have in the MCS API today, then the record that you get is not this record, right? It's for the dummy service.
C
So, basically, if we stick with that model, then we can't be more specific than "it should exist" anyway.
A
Cool, all right, I'm going to the next slide. So I think these are the service types we need to support: ClusterSetIP and headless. This is kind of a little preview, and I have some other diagrams later, but this little blue pod behind the blue service should be... sorry, I need to move all your videos... behind blue.test.svc.cluster... or, sorry.
A
This is an example of what these are like today, in a single cluster. blue.test.svc.cluster.local will get you to this blue pod; yellow.test.svc.cluster.local can get you to these two pods, for a ClusterIP service. For headless-type services, you'll get stuff like blue.test.svc.cluster.local, or, if you have hostnames on your pods, you'll get stuff like 1.yellow.test.svc.cluster.local to get to just this one pod, or yellow-2, blah blah blah, to get to this other one.
A
Okay, I have a few slides with a lot of words, which are a bullet-point version of what's in the spec, and then, just to show you where this is going, there are also these diagrams, which are a version of this too, so hopefully for some people this might be a little bit easier to follow. But the point I want to get across here is what I'm suggesting for ClusterSetIP services, and then, same thing, what I'm suggesting for headless services, and what those look like.
A
Basically, we live in a world today where single-cluster DNS gives you things like service.namespace.svc.cluster.local to try to route to one of potentially many pod backends that match a service's selector. What we want in the clusterset case is service.ns.svc.clusterset.local, and the magic is that this will now be able to route to any matching pods in the whole cluster set. So this is kind of the basis of MCS, but with no super-major changes in terms of the structure of the DNS.
A
Besides the zone being different, that's saying: hey, you could go bump out to somebody else in the cluster set, like a yellow pod that's over in cluster B instead of cluster A. So there's a lot of words for that, but that's the main idea here, and the picture version is: if we had two clusters now that have pods that match our service color, so I've got my blue pods and I've got my yellow pods.
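As a toy model of that routing difference (cluster names, service names, and IPs here are all invented for illustration): cluster.local resolves only against local backends, while clusterset.local can return matching backends from every cluster in the set.

```python
# Hedged sketch, not an implementation: a pretend resolver showing the
# behavioral difference between the cluster.local and clusterset.local
# zones described above.
ENDPOINTS = {
    "cluster-a": {"blue": ["10.1.0.5"], "yellow": ["10.1.0.9"]},
    "cluster-b": {"blue": ["10.2.0.7"], "yellow": ["10.2.0.3", "10.2.0.4"]},
}

def resolve(name: str, local_cluster: str) -> list[str]:
    service, _ns, _svc, *zone = name.split(".")
    if zone == ["cluster", "local"]:
        # Single-cluster DNS: only backends in the local cluster.
        return ENDPOINTS[local_cluster].get(service, [])
    if zone == ["clusterset", "local"]:
        # MCS DNS: backends from every cluster in the set.
        return [ip for c in ENDPOINTS.values() for ip in c.get(service, [])]
    raise ValueError(f"unknown zone in {name}")

print(resolve("yellow.test.svc.cluster.local", "cluster-a"))
# ['10.1.0.9']
print(resolve("yellow.test.svc.clusterset.local", "cluster-a"))
# ['10.1.0.9', '10.2.0.3', '10.2.0.4']
```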
D
So I've been vocal before about my distaste for the svc syntax that we dreamed up for single cluster, but I also think that being different here is probably more harmful than valuable, so I waive that previously vocal objection.
D
I want to ask, though: we've always said from the beginning that the single-cluster DNS suffix, cluster.local, was optional. I mean, it was a suggestion, and people can change it; most people don't, and some providers don't even allow you to change it. What is our feeling on this? Do we take the same stance, or do we require it to be clusterset.local? For context, I've been chided that this is not what .local is for, and we should stop doing that.
A
Just from the perspective of the doc proposal, it's pretty wishy-washy. It's like, "you could configure this, but you probably didn't," which matches some language that's on the MCS API doc right now. But I'm definitely opening the floor if we want to change that in either or both places.
D
I think that is what I'm also hearing, and I like the guidance: if you wanted a different name, go make aliases. It doesn't work super well when things are dynamic, like headless.
E
Okay, so there's going to be an endpoint list associated with the ClusterSetIP?
A
Okay, all right, so we're trending toward "don't configure clusterset.local away as your clusterset zone." I'll definitely make the change in the doc, and then I know there's language in the MCS API KEP right now that talks about it being configurable. So, I don't know, I can just open a PR against that; I don't know what the process is for that one, because it's further along.
A
Headless services have more going on with them, which is that they get DNS records: one for getting back all the IPs for every pod behind the service's selector, because the whole point is to give the consumer the ability to choose which one it wants to go to itself, instead of letting Kubernetes choose for it; and then also one disambiguated DNS name per pod backend (you need to move your videos again, I can't read, whoops), which basically uses the pod's hostname to disambiguate. So, for the service at service.ns.svc.cluster.local:
A
This is where you get the IPs for every pod, so you get one-plus IPs if you have more than one pod; this is kind of the normal case for headless, from my understanding. And then something like web-1, if that's the hostname for one of those pods behind the headless service, gets you the IP for just the pod that has the hostname web-1. And then all the rest of this is the same.
A
hostname.clusterid.service.ns.svc.clusterset.local. So, picture time; it looks like this (I'm gonna move your videos again, whoops). Okay, so the idea here is that you could still have these general names and get back all the IP addresses. So blue.test.svc.clusterset.local, in the headless case, gets you back the IP for this one and this one and this one and this one and this one, instead of just redirecting through the ClusterSetIP.
A
You get each of these individually, and in both clusters, because we're multi-clustering, people. And then, if you want a disambiguated one, if you want to get to just this blue one over in cluster A, you need both blue-1 and also cluster-a, or whatever the name of that cluster is: blue-1.cluster-a.blue.test.svc.clusterset.local, blah blah blah. The suffix is kind of the same.
A
So blue-1.cluster-a gets to there, blue-2.cluster-b gets over here, blue-3.cluster-b goes over here, right? And then the same thing could be said for all of these yellows. I'm not really showcasing it in this example, actually, but imagine there was a yellow called 2 over here and a yellow called 2 over here as well, where they both had the same hostname.
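A small sketch of the headless naming scheme just described (hedged, with invented cluster ids, pod hostnames, and IPs): the general name returns every backend across the cluster set, while adding the hostname and cluster-id labels narrows it to one pod in one cluster.

```python
# Illustrative only; "blue"/"test"/"cluster-a" etc. are placeholder names.
HEADLESS_BACKENDS = {
    # (cluster id, pod hostname) -> pod IP
    ("cluster-a", "blue-1"): "10.1.0.5",
    ("cluster-b", "blue-2"): "10.2.0.7",
    ("cluster-b", "blue-3"): "10.2.0.8",
}

SERVICE_SUFFIX = "blue.test.svc.clusterset.local"

def resolve_headless(name: str) -> list[str]:
    if name == SERVICE_SUFFIX:
        # General headless name: every pod IP in the whole cluster set.
        return sorted(HEADLESS_BACKENDS.values())
    # Disambiguated name: <hostname>.<clusterid>.<service>... -> one pod.
    prefix = name.removesuffix("." + SERVICE_SUFFIX)
    hostname, cluster_id = prefix.split(".")
    return [HEADLESS_BACKENDS[(cluster_id, hostname)]]

print(resolve_headless("blue.test.svc.clusterset.local"))
# ['10.1.0.5', '10.2.0.7', '10.2.0.8']
print(resolve_headless("blue-1.cluster-a.blue.test.svc.clusterset.local"))
# ['10.1.0.5']
```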
D
Well, I mean, it is still a headless service, in that there's no ClusterIP for it, right? It's just not a multi-cluster service; you're saying it's a single cluster. So the question, I guess, is: should we define that the name with no hostname prefix, cluster-a.blue.whatever, resolves to all of the records, while each individual name resolves to the individual records? That would be the same shape as a single cluster, but with one extra token in there.
C
My take, from talking with folks on this, is: well, it certainly seems intuitive. You remove a label and you go to the next level up; it's great. But it doesn't actually seem like anyone needs that.
C
It does seem like more work and more requirements on the implementation if nobody actually has a use case other than "seems like it makes sense." Maybe we shouldn't start there. I'm curious if anyone knows of any actual use cases.
F
Yeah, we have some use cases for basically multi-clustering non-cloud-native workloads that already have their own idea of what a cluster is, typically replicating databases, for example, where they need to be able to address the actual individual hosts that constitute the cluster.
F
So it is helpful to have a way of addressing individual components, and we also have some cases where some traffic needs to go to a specific cluster; we don't care where in that cluster. So: cluster-a.blue, without the blue-1.
C
Cluster-a.blue and cluster-b.blue is actually the service name, yeah, for those services, because I actually care which cluster they're in. Yep.
F
Yeah, or it has to be covered by other concepts that we don't really have yet, like topology information on top of the MCS.
F
Yeah, well, that's part of what comes up next: being able to say "I want to talk to this specific service, but in a different availability zone from me," or "I want to talk to a service specifically in the same availability zone," or basically any topology item. But that doesn't really fit in with DNS.
D
Yeah, there's some work going on around automatically doing topology, but it's doing the opposite of what you just said, which is "prefer to talk to the things in my same zone when possible."
D
So if we want to provide some way of explicitly targeting individual clusters by name, then we will have to build some concept of the cluster identifier into the name, which we have for headless but do not have for headful, right? And my concern here is: if we're going to do it for one, we probably need to do it for both, which would mean we're thinking about extending the VIP case further.
C
Or we have that already, in the form of actually making the service name per-cluster, so we just actually have different services.
A
...put human-readable information in it. That, I feel, is what kind of led people to think, "oh, I could encode that information in my cluster name, and then that should show up here." And then I think what you're saying, Jeremy, is: if they were already encoding that human-readable information somewhere in this whole line of things, they could put it in the service name.
C
I think, in the near term, even if we included the cluster name and gave you that cluster-level addressing built in: first of all, I think it might encourage people to use a single service when they really should be using two, but it also doesn't seem like it actually addresses the separate-availability-zone case, unless the user then takes on making sure that the clusters are each in a different availability zone as well.
C
So if you wanted, say, a Europe address and a US address, that seems like another thing as well. And all of it seems to me, at least, like it would be easier to just say: put it on the user; include that in your service name and export these as different services, and then you have all the knobs.
F
Yeah, especially since it feels really like a workaround for different deficiencies in our ability to describe services and the features that we desire of the services we want to talk to, and so not encoding a workaround in the spec seems like a good idea.
A
Okay, so I'm hearing we don't want a separate record option for "drop the hostname, just cluster.blah-blah-blah," which could get to, whoops, for example, just these over here or just these over there. I'll put some of the example conversation from this into the proposal to motivate that.
A
We brought up a couple of other things that are not listed here, but these are some possible problems I've heard about, so, just seeding everybody's brains, we can discuss them now and/or on the proposal. Basically, for headless services, the overall idea is to make sure the disambiguation is good enough. We were kind of just talking about that, and it sounds like a particular case where this really has to be good enough is StatefulSets, because they have state.
A
Right now, StatefulSets make special pod hostnames of the form statefulset-name-ordinal; earlier in the examples, when I said web-1, that was kind of borrowed from that. These hostnames are sticky to the pod, so between these unique hostnames and the cluster ID, the idea is that this is disambiguated enough for headless services.
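A tiny illustration of that naming scheme (hedged; the StatefulSet name, cluster id, and namespace are invented examples):

```python
# Sketch only: a StatefulSet pod's hostname is <statefulset-name>-<ordinal>,
# and combining it with a cluster id disambiguates it across the cluster set.
def statefulset_hostname(name: str, ordinal: int) -> str:
    return f"{name}-{ordinal}"

def clusterset_pod_fqdn(hostname: str, cluster_id: str, service: str, ns: str) -> str:
    # <hostname>.<clusterid>.<service>.<ns>.svc.clusterset.local
    return f"{hostname}.{cluster_id}.{service}.{ns}.svc.clusterset.local"

host = statefulset_hostname("web", 1)
print(host)
# web-1
print(clusterset_pod_fqdn(host, "cluster-a", "web", "test"))
# web-1.cluster-a.web.test.svc.clusterset.local
```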
A
One thing that has been brought up to me is: if a pod's idea of its fully qualified domain name is only the cluster-local one, is stuff going to get weird if that pod is self-reporting, for some reason, a name that isn't disambiguated enough, because it only has its cluster-local DNS, in our multi-cluster cases? StatefulSets might be the drama here, because I think they might be the most dramatic case, but this could be the case for any headless service.
D
So we know through SIG Network that there are a handful of cases where applications look at their own hostname, or their own FQDN as defined by hostname -f, or sometimes just by hostname (thanks, guys; thanks, Red Hat). If that is a problem for those apps, like they're using Kerberos, for example, then this will be a problem if they need to get multi-cluster resolution: there simply isn't a way to say "use the multi-cluster suffix instead," because the kubelet doesn't know about that.
D
There is an open proposal to allow pod authors to specify their own FQDN, one that is only seen by them, right, not in DNS. So that at least gives them the ability to take the reins here when they need to. I think that will be enough to get over the hurdles for now, and we can figure out if there are some more systemic problems. TL;DR, I think we're okay.
D
I'm happy to point you to it. We have a handful of customers, or users, who are just saying: look, why do you care what my hostname is? Just let me set whatever I want. And, you know, they're right.
A
Cool. So I'm taking some notes. Okay, I have some other feelers out on this topic too, but I'll follow along that thread as well.
A
Okay, so that is my story. This is basically the presentation version of the doc again, so please take a look at the doc and comment as you see fit. I'm also going to incorporate some of this feedback here as well, particularly about the...
A
And the other big edit I have in my notes is, scrolling up, that the clusterset zone should not be configurable, until and unless we decide later that it should be. And then I think I'll put some comments here, but it sounds like we're kind of anti-PTR-records, or, I guess, we get them a different way; we expect to get them a different way. So I'll put in the information about that, and then SRV records are still TBD; I'm going to talk to SIG Network about this. Cool.
B
Okay, that was a great presentation. I believe, Stephen, that you have something on the agenda next; why don't you go ahead and take it?
F
Yep, thanks. So this is just really dipping the toes in to see if it's worth discussing further.
F
In this context: we have a proposal in Submariner to add weighted load balancing of services across a cluster set, and we've discussed this here in the past already. The idea is, and this is all implementation detail, for a controller somewhere to figure out how costly it is to talk to services in different clusters and to give the right target back to requesters.
F
Based on that. We said in the past that, ideally, what we want is for just the right thing to happen all the time, and I think that's what the goal should be.
F
This is just sort of asking the general audience here whether there's interest in a more academic discussion of that, in which case I'd go off and prepare a slide deck with a lot more detail, or whether the collective wisdom of the group is that the goal should really be...
F
Do the right thing as far as MCS is concerned, and push problems out to other groups: for example, to determine what the financial cost of a given connection is, or to ask the user to express, well, no, to try not to ask the user to express, performance characteristics and all that sort of thing. The only thing I can see currently, in the proposal we have, that we'd want to ask the user is whether they want it cheaper, faster, or some magic combination of the two, and that tends to be meaningless in the end anyway.
D
We have, in 1.21, which is due in weeks, a new change to add something called endpoint hints, topology hints, sorry, to endpoints, where every endpoint in a slice can include some hints about topology information that was decided by whoever created the EndpointSlice. So in the single-cluster case, that would be the in-cluster endpoints controller; in the MCS case, it would be whatever your MCS implementation is. It's supposed to be something that the proxy subsystem can then consume to make topology-smart decisions.
D
So, for example, I can say: I've decided this endpoint is good for zones A and B, and that endpoint is good for zones C and D. The decision there is pretty coarse; it's really about topology without a concept of cost, on the assumption that there's zero cost within a zone and non-zero cost between zones, and so we'll try to optimize for that. So I wonder if that is a good enough starting point to figure out the question that you're asking, Stephen, or if you really need more cost-based information.
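The EndpointSlice shape being described can be sketched roughly as follows (hedged: field names follow the discovery.k8s.io hints API as I understand it, and the zone names and IPs are invented); a kube-proxy-like consumer would keep only the endpoints whose hints include its own zone:

```python
# Rough sketch, not a real API client: an EndpointSlice-like structure
# where each endpoint carries "forZones" topology hints, plus a filter
# a proxy-like consumer might apply.
endpoint_slice = {
    "kind": "EndpointSlice",
    "endpoints": [
        {"addresses": ["10.1.0.5"],
         "hints": {"forZones": [{"name": "zone-a"}, {"name": "zone-b"}]}},
        {"addresses": ["10.2.0.7"],
         "hints": {"forZones": [{"name": "zone-c"}, {"name": "zone-d"}]}},
    ],
}

def endpoints_for_zone(slice_, zone):
    # Pick endpoints whose hints say they are good for the given zone.
    return [
        addr
        for ep in slice_["endpoints"]
        for addr in ep["addresses"]
        if any(z["name"] == zone for z in ep["hints"]["forZones"])
    ]

print(endpoints_for_zone(endpoint_slice, "zone-a"))
# ['10.1.0.5']
print(endpoints_for_zone(endpoint_slice, "zone-d"))
# ['10.2.0.7']
```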
F
Yeah, it sounds like a very good starting point, and I'm not sure what the cost aspects would be. I think in most cases, yeah, like you say, it's zero within the same availability zone and something outside, and probably fairly similar, or we don't care, once we decide to incur some cost, how much that cost is actually going to be, you know, comparing zone A to zone B versus from zone C, for example. Yes, that's definitely very helpful.
C
Thanks, Tim; Tim said everything I was going to say. I think that, regardless, this is a conversation that should involve SIG Network; I don't think we should try to design the optimal flow from the MCS standpoint only.

C
I think starting with what they've been working on is probably good, and then, if we were to determine that it didn't actually meet our needs, then starting with SIG Network on meeting those needs is probably going to end better than if we try to come up with a list of requirements.
B
Was that all you needed, Stephen?

F
Yeah, for the time being.

B
Okay. Well, I think we're about at time, and I think that was the last thing on the agenda. So thank you, everybody, for joining, and we'll see you next week. Have a great rest of your day and a great week.