From YouTube: Kubernetes SIG Multicluster Jan 24 2023
B
Yeah, I forget who I was talking to, maybe it was you, Nicholas. Not that I'm really hardcore about New Year's resolutions or anything, but I kind of like to think about it, and I haven't even done that for this year yet and January is almost over. So as far as I'm concerned, it's still 2022.
D
I think we're probably good. So thanks everyone for joining us today at the Tuesday, January 24th SIG Multicluster meeting. Laura, you have the con.
B
Hey, okay. I wanted to give a few updates about various initiatives going on, and then I wanted to ask two questions about MCS. I'm trying to get our act together here, especially for the voluntary API review, so I just want to make sure I have all my ducks in a row for that. So that might be a little bit discussion-y, but first maybe I'll start from the top and go down, because the first items are more just updates.
B
So first off, I wanted to talk a little bit about the SIG-MC website. We talked a little bit about this at a couple of meetings at the end of last year, but Nicholas has been helping me, and Mike I think also produced some content for this site that we're working on, kind of in the spirit of the Gateway API site, to get some more high-level documentation and outreach for the initiatives of SIG Multicluster. So I did actually LGTM the ticket to create the official repo.
B
So
we're
about
ready
to
migrate
over
the
work
that
we've
been
doing
now
into
something
that
will
be
like
we
can
properly
DNS
against,
like
sigsaw,
Kate's,
IO
and
stuff.
The
latest
preview
is
still
visible
here.
You'll
see,
there's
some
holes,
one
is
for
the
home
page
so
hard
to
write,
but
a
couple
updates.
B
Besides some of the new content that's in here, we are going to use basically the blog, or maybe we'll change it to the word "announcements", to have some of the big stuff that goes out on the mailing list also covered in this format, so it's a little bit easier for people to discover. So we've straight up
B
just copied and pasted what was in the mailing list announcement, regarding KubeFed in this case, but okay, Nicholas has even more updated information, which is great. So this is our real, for-sure repo, the kubernetes-sigs SIG Multicluster site repo. The repo is here, so now it's a matter of us migrating that work over into here, and especially into the format that we wanted. So to that point, I'd love for anybody's
B
eyes on some of the content here, for, how to say, the content architecture: whether it makes sense to you and whether the content is at the right level of specificity, etc. There is a list of known cleanups still to do, which is being tracked in here, because Nicholas and I have been working really closely together on this, from before it had an official repo.
B
But if anybody is interested in collaborating on this, definitely feel free to reach out. If nothing else, your editorial eye is appreciated on the proposal site, the one at Laura Lorenz's GitHub slash blah blah proposal. And I consider it blocking to make sure that the home page is done, and that any little tiddly bits we don't really want to be there hidden, that aren't finished, are handled before we migrate the repo over, so that when it's on the official site...
A
Just to make sure that everything is up to date. So I saw that there was an official calendar that has been posted, so I included that in a PR that you'll probably merge tomorrow, and I also, yeah, I did the introduction webpage yesterday, oh great, with a PR also on this. So we'll probably review that tomorrow. So just wait a couple of days, I guess, for everyone to review stuff, and we'll move forward with the rest, I guess. Cool.
B
All right, awesome. I will merge those changes after this meeting. So for people who haven't already read it a couple of times and want to give it a look and see how it all looks, DM either of us or post in the Slack.
B
If you see something missing, comment on the doc that has all the to-dos and the proposal, or send a pigeon; anything will do in terms of letting us know how we can make this look all spiffy before we migrate it over. So that's what's going on there.
B
Cool, okay, next topic. Also from sometime at the end of last year, but not for a couple of meetings now: we were talking about the MCS end-to-end tests. For a while at the end of last year we were talking about end-to-end tests, on the one hand because they are a blocker to moving MCS into beta, and on the other hand because there's interest among the SIG in extending the testing concept, not necessarily for the graduation requirements, but to make it easier for implementers to know whether their implementation conforms to the upstream standard or not.
B
And so we had a couple of side meetings. We determined a while back to take that offline and talk about it, and we have this doc, which is pretty detailed now, about what we want to do. So if people are interested, definitely give it a read. But in short, as mentioned before, the idea is to give implementers something they can use to determine, "am I meeting the criteria of the upstream spec?", and today we don't feel like the current end-to-end tests really do that very well.
B
There are a couple of guiding principles in here regarding the end-user experience, like how to make those test failures contextualized to where in the spec you are deviating, and making certain types of things easily configurable, so you can test this at different levels of scale and with timeouts that make sense for your implementation.
B
The current step we're working on is basically the POC phase. So, in a world where we have a larger system, we can expose these messages that are contextualized to the spec, provide this level of parameterization, and, most importantly, support a matrix of different cases, because one thing we discovered when we were talking about this was that we want two-plus clusters.
B
Some cases, you know, where n can be a large number: to check headless, where the local service exists and exported services are available, but not in every cluster, right? So that's one path, but then non-headless, where the local service exists and it's available in the exporting cluster, and all these types of combos. We want to see if we can find a way to make something that makes this easier. I see people are talking in the chat, maybe not hearing me; some people still can't hear me.
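For reference, a rough sketch of the kind of parameterized matrix being described; every name here is hypothetical and not from any actual MCS test suite.

```go
// Hypothetical sketch of a parameterized conformance matrix like the one
// discussed above; names and structure are illustrative only.
package conformance

import "testing"

type scenario struct {
	name              string
	headless          bool // headless vs ClusterSetIP-style service
	localService      bool // does the consuming cluster also run the service?
	exportingClusters int  // how many clusters export the service
}

func TestServiceImportMatrix(t *testing.T) {
	scenarios := []scenario{
		{name: "headless, local service exists, exported from a subset of clusters", headless: true, localService: true, exportingClusters: 2},
		{name: "non-headless, local service exists, exported from one cluster", headless: false, localService: true, exportingClusters: 1},
		// ...more combinations from the matrix in the doc
	}
	for _, s := range scenarios {
		s := s
		t.Run(s.name, func(t *testing.T) {
			t.Skip("sketch only: each case would create ServiceExports and assert on the resulting ServiceImport and EndpointSlices")
		})
	}
}
```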
B
Sorry, distracted by the chat. So yeah, that's the sort of sophistication level that we want to get these tests to.
B
This is kind of separate from the end-to-end tests; it's inspired by them, but it's a different model and a different end user than the end-to-end tests that the KEP graduation requirements are tied to right now. So Stephen Kitt and I are working on some POCs for this and coming together on that, hopefully before the next meeting; we have something tentatively on the books for next week to sync up. But I wanted to give the update and make sure everybody knew where we're going, what the direction is for these tests. Ultimately, again, the point is to make it so that every implementation, like Submariner, GKE, AWS, can run against this test suite and get really detailed feedback.
B
Or, you know, the best feedback we can give about where the gaps are against the spec. So yeah, take a look at this doc if you're interested, and again, Stephen and I are off in our corners thinking about how to make this work from an implementation perspective, so that it doesn't become too burdensome to build all of those different combinations of tests that we want to support.
B
But yeah, feel free to contact us if you have ideas or interest in this project, or are interested in this type of composable testing framework. We would love to hear about any prior art as well.
E
Cool, I think it's great to see this happening. And once the framework itself is in place and there's a list of tests that we want written, like in Gateway, we've got a vacuum there, and that's a great, accessible on-ramp for new contributors to be able to write conformance tests. So yeah.
B
Totally, and I think there's another meta level later of conformance-slash-integration. How to say it: previously in this meeting we've discussed how, when you want to do multi-cluster with Kubernetes, there's now a growing list of CRDs and other tooling you put together, Gateway API and MCS being examples, and they even have their own potential integrations with each other. So being able to compose something on top of all of that, I feel, is going to be a big problem for us.
B
Yeah, so anyway, again, any prior art or thoughts or whatever, feel free to ping in the doc or send a carrier pigeon or whatnot, and I hope to keep updating you, hopefully next meeting, with some more deets. Okay. So those are the more update-y ones, and then there are two things I have some questions or discussion points for the group about, and this is because I'm gearing up for the MCS API review. Just to remind everybody, there's an API review process by which things that use names like k8s,
B
That
use
like
the
domaincase.io
need
to
undergo
MCS
API
is
actually
not
in
this
category
because
it's
out
of
tree
and
is
using
an
X
cates.io
API
Group,
but
there
was
some
interest
in
undergoing
a
voluntary
API
review,
which
is,
you
know,
allowed
not
necessarily
blocking
to
as
a
graduation
requirement,
but
just
in
case.
So
two
things
that
have
been
coming
up,
in
my
view,
is
just
generally
about
service
import,
dot
status
and
right
now
the
spec
to
find
something
called
service.
B
Import.Status.Clusters
where
clusters
is
the
list
of
class
clusters,
is
the
list
of
clusters
that
contribute
to
this
service.
So
like,
if
you're
in
a
cluster
that
has
some
service
Imports
sorry,
my
Doc's
working
out
that
has
some
service
import
and
it
has
like
that
that
service
is
being
exported
from
two
other
clusters:
Like
A
and
B.
Then
this
is
the
list
of
the
Clusters
A
and
B
right,
I'm
curious
to
know
if
this
isn't
any
implementations
now
in
general,
the
service
import
status,
I
feel
is
not
like
super
well
defined.
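For reference, a minimal sketch of the status shape under discussion, paraphrased from my reading of the out-of-tree multicluster.x-k8s.io/v1alpha1 types; treat the details as illustrative rather than authoritative.

```go
// Sketch of the ServiceImport status shape being discussed; paraphrased, not
// the authoritative definition.
package v1alpha1

// ServiceImportStatus describes the derived state of an imported service.
type ServiceImportStatus struct {
	// Clusters is the list of exporting clusters from which this service
	// was derived, e.g. clusters A and B in the example above.
	Clusters []ClusterStatus `json:"clusters,omitempty"`
}

// ClusterStatus contains per-exporting-cluster information.
type ClusterStatus struct {
	// Cluster is the name of the exporting cluster.
	Cluster string `json:"cluster"`
}
```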
B
In fact, in the spec itself it doesn't really say anything, but it is in the API proto and there's a comment on it. One thing I can definitely see it possibly being for is: there's a suggestion to do endpoint TTL by establishing leases with other clusters, then checking if you still have connectivity with that cluster, and using that information to decide whether you're going to retire some EndpointSlices from potentially old backends. But yeah.
B
So I know that was a lot of detail, I guess because I was recently reading it. But does anybody have thoughts or comments on ServiceImport.status.clusters, or just the status field in general for ServiceImports, and is this being used in any implementations today? I see a hand from "kubernetes SIG multi-cluster", which I think is Jeremy.
D
Can you hear me? Oh yes, it looks like it, okay. All it takes is three refreshes of the Zoom web client to pick up audio again. Okay, I think we may have over-specified here. I think we came up with a bunch of use cases, so I guess my sentiment's the same as Mike's, but I'm using kind of opposite wording: we definitely under-emphasized the use of these fields, but I also just don't know that we have solid use
D
cases, don't know that a given implementation might actually need it. In the same way, it's good info for observability, but I wonder if this isn't just stuff that should be, and I hate doing this, but better shoehorned into conditions. When we defined this, I think the idea was that conditions, basically a status API that consists of 30 different conditions all custom to each provider, is kind of...
D
At the same time, if everybody's implementing things in a different way and we don't yet have a solid use case, I'm not as clear whether this is beneficial to keep or whether it just adds complication.
B
Yeah, I definitely didn't feel like I had its use case or its motivation well established, certainly in the spec. So I don't know if it will survive API review, but if people are using it, or feel that it needs to be there for some other generic tooling purpose, then I want to check. But yeah, as you say, there's a lot of implementation free rein here, especially on how the ServiceImports get there.
B
Then, especially compared to the ServiceExport, where I feel the statuses are a bit more clear and we have more jurisdiction to mandate what they are, the ServiceImport feels a little less so. I feel like it could be a bit more clear in the spec how important that is. Even the endpoint TTL part: in the KEP it says this is suggested, not required, but we could make it a little bit more clear, yeah.
G
I don't know, I'm speaking a little bit out of turn here, because I don't know how we've implemented the controller that we use over on GKE, but I can imagine that from a user's perspective, one point of frustration I have is that when things are broken, it's really hard to find out how. So if this status.clusters, export or import, I feel like it would be more useful in the export, but either way, if this could somehow inform whatever upstream controller
G
that's written, to surface that information to me as a user faster and more accurately, it could be useful. But maybe there are better ways to do that. Yeah.
B
On the ServiceExport side, we provide room for conditions, and specifically, the conditions we require are whether the ServiceExport is valid and whether the ServiceExport has a conflict. So that type of thing does exist, with these two cases explicitly defined in the spec on the ServiceExport side.
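For reference, a minimal sketch of the two ServiceExport conditions just mentioned, again paraphrased from my reading of the out-of-tree v1alpha1 API; illustrative only.

```go
// Sketch of the ServiceExport condition types referenced above; paraphrased,
// not the authoritative definition.
package v1alpha1

// ServiceExportConditionType identifies a condition on a ServiceExport.
type ServiceExportConditionType string

const (
	// ServiceExportValid: the export maps to a valid, existing Service.
	ServiceExportValid ServiceExportConditionType = "Valid"
	// ServiceExportConflict: the exported Service conflicts with the same
	// Service exported from another cluster (for example, differing ports).
	ServiceExportConflict ServiceExportConditionType = "Conflict"
)
```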
B
Then I think on the ServiceImport side, where we're at here is: mandating that the clusters are listed in the ServiceImport, where honestly we have very little definition right now of how clusters are even supposed to be referred to, is that giving useful information for either the MCS controller consuming on the consumer side, or, as you're bringing up, the end user? I think we only have maybes for the first thing, like for endpoint TTL, and then we have a no for the end user, I think, for this case. But yeah, I'm still open to some ideas on that.
F
Sorry, this is Tom, I work with Stephen on Submariner, hi. We actually do use clusters. Well, we just need a singular cluster ID field. I mean, our implementation currently does not aggregate the services imported from multiple clusters, because we need to retain, well, for instance, certain cluster-specific information, like the ports, the service ports.
F
So that said, do we need the field? Not necessarily. We can always use an annotation for that. But we've also had some discussions, maybe this is a side topic: are we deviating from the spec by not physically aggregating the service ports? Possibly, but, you know, our approach maybe is a little bit different.
F
Yeah, like I said, we don't technically need it. I mean, we could also, you know, embed an annotation in there, too, yeah.
D
Label
well,
but
whenever
I
hear
that,
though,
I
hear
that
we
really
need
a
field
like
annotation
annotation
apis,
in
my
mind,
are
always
just
like
V1
until
you
agree
to
add
a
field
because
otherwise
you
just
have
unlimited
annotations.
So
yeah.
F
For sure, yeah. I mean, also, like I said, one of the reasons we need that cluster ID field, or why we use it as a clusters field, is because the ServiceImport doesn't have a provision to store cluster-specific information, like service ports per cluster. So if we were to aggregate... that's one reason, well, it's probably the reason we don't aggregate: because we need to maintain per-cluster information. Okay.
F
Yes, okay, per cluster. So the service port information isn't aggregated or unioned at that point, because we need to communicate the cluster-specific service ports, and then on the backend side, that's where we do the in-memory aggregating, the union of the ports and all that stuff. So anyway, that's our implementation of it, whether we're in compliance, or conformance, or not. Right, we might not be, but...
F
As well, but to do that, but, so...
B
Okay, yeah. I think, because, yeah, I think if the clusters field is populated with information about the cluster the ServiceImport is in right now,
B
then that feels like a use case for the About API. But if the clusters field is filled out with not the current cluster, like it's referring to a foreign cluster, then yeah, I think that is a case where it may be that this isn't entirely equivalent to how ServiceImports are defined in the spec, because they are intended to be aggregated and to do conflict resolution. But if you have that problem, then we need to figure out where that needs to go.
F
We
we
could
do
I
mean
public
resolution
without
necessarily
physically
aggregating
it,
but
you
know
that
said:
I
mean
you
know.
We
certainly
would
want
to
be.
You
know
conformant
so,
but
for
us
I
mean
one
thing
we
could
do
is
still
advertise
the
per
cluster
service
report.
So
we
can
maintain
the
per
clusters
like
Service
Port
information,
but
also
have
an
aggregate
as
well,
but
yeah.
Now.
B
Yeah, and I see Stephen's comments, because his audio isn't working either, about how this is tied to what ends up in the source cluster label on the EndpointSlice. Yeah, 100 percent agree that the usage sounds similar to what the cluster label on the EndpointSlice is trying to do, where the ServiceImport is like one unit and the EndpointSlice has all the per-cluster... like, that is where the per-cluster information is.
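For reference, a rough sketch of the split being described: the ServiceImport stays aggregate, while each exporting cluster's endpoints land in an EndpointSlice labeled with its source cluster. The label keys reflect my reading of the MCS KEP, and the helper function itself is hypothetical.

```go
// Illustrative sketch of per-cluster endpoint placement; label keys are my
// reading of the MCS KEP, everything else is hypothetical.
package sketch

import (
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// importedSliceFor shows where the per-exporting-cluster information would live
// on the consumer side while the ServiceImport itself stays aggregate.
func importedSliceFor(cluster, service, namespace string) *discoveryv1.EndpointSlice {
	return &discoveryv1.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "imported-" + service + "-",
			Namespace:    namespace,
			Labels: map[string]string{
				// which exporting cluster these endpoints came from
				"multicluster.kubernetes.io/source-cluster": cluster,
				// which multi-cluster service they belong to
				"multicluster.kubernetes.io/service-name": service,
			},
		},
		AddressType: discoveryv1.AddressTypeIPv4,
	}
}
```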
F
Yeah, I mean, if we can, yeah, that for me should go in there, in the slice, at some point. But so the other thing with our implementation, and again maybe this doesn't really match the spec here,
F
is the fact that we don't create a cluster-set-wide VIP to represent the service. Actually, each cluster advertises its own. Well, actually, if it's non-headless, we advertise the cluster IP of the service, or what we call a global IP that we might assign, which is to support overlapping CIDRs. So that's another thing, and again, I think aggregation also assumes that you have a single cluster-set-wide VIP attached to the ServiceImport, which, again...
F
But I mean, the spec kind of says, oh, it's a single... although there is one sentence in there, if I can find it, that is sort of... let me see if I can find the spec. There is something that says... oh boy, if I can find it. Oh, actually, I just got it: ClusterSetIP. So it states that this IP may be a single IP used cluster-set-wide, or assigned on a per-cluster basis.
F
Yeah, we do the latter, so not an aggregated one. Now again, for the reasons we've seen, if an implementation does do the latter like we do, should we just put all the IPs in the list? Maybe that's how we theoretically aggregate, but then if somebody consumed that from outside of our implementation, how would they be able to consume multiple IPs? I don't know. I mean, the spec doesn't talk about that. That's the only thing that says anything about per-cluster IPs, that one little blurb there.
D
I will say GKE does the per-cluster bit as well, so we've definitely made that okay. How...
D
Well, the VIP is for the consumer, yeah; on the producer side, this represents all the endpoints, for all the consumers, all the endpoints. Each cluster gets its own bit.
B
Yeah, so I think it sounds like on the Submariner side the consumer has many IPs, a VIP for each exporting cluster, but in GKE the ServiceImport has one IP, which is the way out to get to all backends in all the producing clusters. So it's just like, where the...
B
What I'm trying to track this back to is: is there something we need to do in terms of either removing, simplifying, clarifying, tightening, question mark, what ServiceImport.status.clusters should or should not be?
B
It sounds like something in the ServiceImport that can hold per-exporting-cluster information is of interest. And potentially, it seems to me the spirit of the spec was that EndpointSlice is the place for that, but what I'm hearing is that in practice, ServiceImport is at least a stopover for that per-cluster information, metadata I guess, and up till now ServiceImport has been very simplified, in terms of, like, resolve conflicts.
F
Yeah, I think so. I mean, certainly, I understand the aggregation and, like you said, having that condense things into a single non-conflicted, sort of merged field, and maybe the spec as it stands now serves that purpose, like the spec's service ports being a merged union, and...
B
Producing-cluster metadata in the ServiceImport: right now we keep all of that in the EndpointSlice, but it seems there is more use for it. Okay, so I'll try and consolidate some of those thoughts in there. Certainly I don't think it's clear in the KEP, at least right now, what that status field could or should be used for, and if nothing else we can keep it at least at "this is where per-cluster stuff could go", and then we can see if we need to define that more.
F
Okay, thanks for considering it, yeah.
B
So I'm going to change the topic and table that topic for now. So, headless DNS. For a headless service right now, you need to know the... okay. So for a headless service in general, you usually need the hostname of the pod to disambiguate which specific pod you want to contact, and then in the case of multi-cluster, we said we also have to include the cluster name, because now you could have pods with the same hostname in cluster A and cluster B.
B
So you actually need one more piece of information as a coordinate to find them: you need cluster-a dot hostname dot namespace, etc. Since then, people have also been thinking about region as a disambiguating feature, and whether this should be part of the DNS spec in general, or something that needs to be embedded somehow in the service name. And for me, knowing very little about it, I didn't know if this was basically important enough, I guess, to have its own sub-label.
B
But there are other cases, and increasingly more cases, where regionality is more of a first-class thing, like for networking purposes, and just in general, as a concept, regionality is more similar to a hard boundary.
B
So I just wanted to get people's opinions and thoughts about it. I'm asking very specifically about headless multi-cluster DNS, but I feel like this decision, or whatever, this discussion also applies to how we think about multi-cluster generally, and what we're setting users up for in terms of the trust boundary between clusters in different regions.
B
So my most direct, concrete thing I want to figure out is whether multi-cluster Services headless DNS should leave room for a label in the DNS name for the region that the cluster is in, as a required or suggested, those are the two options, disambiguating coordinate similar to the cluster name. And then, more generally, what people's thoughts are about regionality being kind of a first-class concept within a cluster set, for SIG Multicluster's future projects in general.
D
I'll try this an eighth time. My take: this makes sense. I think the concept of location is pretty core to multi-cluster. I don't know that I would call it region; I think that might be too specific. Just location, yeah.
D
That could be rack, it could be data center, you know, if I'm thinking on-prem; it could be zone, it could be region. But some regional identification, like location-based identification, makes sense, I think. Every cloud, every larger-scale platform is going to have some form of tiering where you're going to want to be able to identify subsets.
B
Those are at some sort of different tier, I guess, whereas something like location is at a tier where, both for this regional headless, or sorry, for headless pod DNS to be regionalized, or sorry, location-alized, and then maybe for future multi-cluster tooling to be aware, this is a boundary that multi-cluster users will always want to respect in some way.
G
So one of the use cases that prevents customers I'm working with from using MCS, and sends them towards a service mesh, for example, is the inability to set a location preference, like: prefer zone one first, and then if zone one is unavailable, go to zone two. But does this tie in? Does this network topology sort of thing make implementing that sort of feature easier, or, and please pardon my ignorance, I'm not in the controller code like you all are, is that something that's solved elsewhere?
B
It's
not
solved
in
a
generic
way
in
the
spec.
There
is
a
little
bit
of
overlap
with
and
I
would
also
say
it's
not
solved
in
a
generic
way
yet
for
single
cluster
there's
some
progress
going
on
there
topology
aware
routing
that
we
overlapped
with,
but
our
conclusion
with
what
Sig
network
has
for
that
right
now
is
that
basically
multi-cluster
doesn't
super
meaningfully
participate
or
sorry
multi-cluster
Services
can't
super
meaningly
participate
in
it.
E
Various levels of locality. And I think, on network topology, one of the things I'm trying to wrap my head around is that they feel layered, like zone, region, data center, a rack; there could be various nested levels. Some, like within a cluster set, they might all be in the same cloud availability zone, but you would still probably prefer the local cluster over a service that is imported from a remote cluster.
E
That's something that we kind of just assumed, yeah. But maybe there's a need for that to be more explicit, or to, like, respect the Kubernetes topology-aware hints.
B
That's kind of where I'm at, yeah. Right, yeah, and that's one kind of scary thing about it, because there was one variation of this that's like, okay, well, your cluster name: we gave you 63 characters, so disambiguate it there; or in your service name, separate out by region and put that in your service name, you have 63 characters there. And then, on the other hand, the idea of providing space for arbitrary sub-nested amounts of location metadata in DNS is...
B
It
is
like
its
own,
like
big,
can
of
worms
to
be
opened,
but
yeah
I,
think
I.
Think
if
we
can
come
up
like
if
we
think
location.
B
And, you know, that's what I want to figure out most, like, tactically. But I do feel that if you're having this experience in SIG Network and GAMMA and Gateway, and we're having this experience, where once we let people cross these higher-latency boundaries by making things accessible across clusters, then eventually that becomes grounds for this needing to be something that the user has, like, routing configuration control over, and in our case the way we would expose that is in DNS.
B
Okay, Stephen says they could also be scalability boundaries, in terms of the amount of data that needs to be shared across a cluster set.
B
Yeah, so yeah, okay, these are good first thoughts. I think at first I just needed a temperature check of, like, "no, we don't want location with us, we don't want to get into that, no", if that was going to be the answer; or if it was more like "yeah, I do think this is a problem that either we have, or my customers have, or we're getting proposals for now", etc. And it feels like it's more strongly on that second point.
D
Yeah, can we explore how to address this at that topmost, most flexible level? So yeah, like, location, not region, and what's the minimum we can define that creates the room necessary for implementers, without, you know, adding any constraints?
B
Right, yeah. In my mind the options are: do nothing and leave the DNS as is, which I think we don't like; make another label spot for location, where implementers can define whatever sub-label describes their location; and then, I think the most flexible but hard to predict on the DNS side, if there's tooling or something, is to allow an arbitrary number of sub-labels to describe a nested location in between, like in between the cluster name and the service name.
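For reference, a rough illustration of those three options for a headless pod name. The first form follows the MCS KEP as I read it; the location sub-labels in the other two are hypothetical, which is exactly the open question here.

```go
// Illustrative DNS-name shapes for the options discussed above; the location
// sub-labels are hypothetical, not part of any current spec.
package sketch

const (
	// option 1: leave DNS as is; hostname plus exporting cluster disambiguate the pod
	asIs = "pod-0.cluster-a.my-svc.my-ns.svc.clusterset.local"
	// option 2: one reserved sub-label for an implementer-defined location
	singleLocationLabel = "pod-0.cluster-a.us-east1.my-svc.my-ns.svc.clusterset.local"
	// option 3: arbitrary nested location sub-labels between cluster name and service name
	nestedLocationLabels = "pod-0.cluster-a.rack7.dc2.us-east1.my-svc.my-ns.svc.clusterset.local"
)
```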
B
So that's where I am right now, but I will make some slides and bring slides next time. How about that.
B
It has been too long, you know, I just didn't have my slide game going, but I'm ready to present slides to SIG Multicluster yet again. Some people on the video seem worried, but yeah, I have a reputation for needing to draw things on slides and not only say them in words. So, well.
E
Just to reconfirm: for allowing you some control over this, should we still assume that the same principles apply?
F
Oh yeah, just one other quick thought, going back to the cluster-specific information, and I don't know how much time we have, but it's just a quick idea: perhaps the ServiceImport, as it is, could still be the aggregated, unified information, but perhaps introduce a ServiceImport slice on the side as well, alongside the import. I don't know, it's just an idea in my mind, and maybe it doesn't make sense, but...
F
Yeah, that's just an idea, yeah. That would have the per-cluster, say, service and port information, or ServiceImport information, that maybe wouldn't be required by the spec necessarily, but anyway.
B
Yeah, I feel wrapped, and I'm going to take these thoughts and try to make some PRs that consolidate them as far as I think they can go. And again, this is all in service of the voluntary API review, but I will tag in the people who were discussing this here. And feel free to give your other topic.
C
Thank you a lot. Just, could you scroll? Oh sorry, I can't see the name at the beginning of the document. You know, we mentioned after the US KubeCon there was talk about cross-cluster controllers, and then also about multi-cluster control planes. There was some interesting community discussion, so I wonder if there are any follow-ups on that, or maybe I didn't catch it.
C
Is there any other interest from the community about that? We would like to make a presentation about that as well, yeah.
B
I think there's a lot of interest. I think we need to get all of us together, or someone needs to kick off the discussion thread and pull together some prior art. It's a bit split: there's a Slack conversation, but that sort of slowed down after KubeCon. So yeah, all I can really say is there's definitely a lot of interest. I don't think it's the thing that has the one-pager yet, you know, that we can consolidate around, so I...
D
Awesome, this was a great meeting. Happy New Year, everyone, by the way; this is our first time talking this year. Thanks for the great discussion, it looks like we have some good follow-ups.