From YouTube: Kubernetes SIG Multicluster 2020 June 9
A
Okay, it's still giving me the same dialogue, so maybe we'll give it a few minutes.
A
Okay, so on the agenda today:
C
Cool, okay. So the first thing was bringing back the discussion about cluster ID. I just added a link to the doc there, and I think it's probably worth discussing a bit today, but my hope was really that we could spur people to read through this again and think about it, and maybe have a more in-depth conversation next week. But as part of multi-cluster services, it's become relevant.
C
What that can look like: I think the simplest constraint, and all that's in the multi-cluster services KEP right now, is that this cluster name needs to be a valid DNS label. But if we're going to start adding constraints like that, it seems like some kind of consensus around a cluster ID is starting to make sense again, and there's a pretty well-thought-out proposal that was circulating a little over a year ago.
C
It looks like towards the end of last year was the last time it really got eyes, but now we have a concrete use case, the first one with multi-cluster services, and probably another one with the Work API. So maybe we should start thinking about what that looks like again.
C
I think there are a few things kicking around, like: it needs to be a valid DNS label; maybe it's a well-known, non-namespaced resource that can be created by a cluster admin; and we say it needs to be unique.
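For illustration, a minimal sketch of what such a well-known, non-namespaced resource could look like as a Go API type. The names here (ClusterID, ClusterIDSpec) are hypothetical, not from any KEP:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ClusterID is a hypothetical cluster-scoped (non-namespaced) resource,
// created once by a cluster admin, per the constraints discussed above.
type ClusterID struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ClusterIDSpec `json:"spec"`
}

type ClusterIDSpec struct {
	// ID must be a valid DNS label and unique within the set of
	// clusters (the "domain") that share services with each other.
	ID string `json:"id"`
}
```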
C
Kind of like we've said with namespace sameness applying within the domain, you'd define the same kind of thing for cluster ID: it needs to be unique within that domain. And I think there was some discussion in the original proposal about whether or not it should be mutable or immutable.
C
It seems like at some point you might want to change it. But in a minimal implementation you could just say it's mostly immutable: you may need to redeploy a whole bunch of things if you change it, but basically we don't recommend changing it. I think it probably needs to be more defined than that.
A
Yeah, in the sense of the ways in which it surfaces into multi-cluster services, I'm trying to avoid taking on a dependency on feeling like we have solved cluster ID entirely in order to move forward, and I'm sort of thinking to myself:
A
Could we put it into a very informal API location to start with? Like maybe it's a property of the controller that you can configure, so that it knows it's supposed to report its cluster ID as X.
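A minimal sketch of that informal approach, assuming a hypothetical --cluster-id flag on the MCS controller rather than any formal API object:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag: the controller is simply told its cluster id
	// and reports it as-is; no cluster-id API object is involved yet.
	clusterID := flag.String("cluster-id", "", "DNS label to report as this cluster's id")
	flag.Parse()
	fmt.Printf("MCS controller starting for cluster %q\n", *clusterID)
}
```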
C
Right, yeah. And I think with the MCS KEP, that's kind of where it's headed now: let's not block on defining cluster ID. All the MCS KEP says is that the implementation needs to assign each cluster some ID that is a valid DNS label, and from the MCS standpoint that seems like enough. But if we're going to start using it elsewhere? I mean, the Work API is a new discussion that we've been having too, but if that's going to have its own ID too?
D
I think it's fine. I think it's fine to not block. My fear, though, is that we end up with a bunch of different things that are all doing it kind of differently, and then reconciling that is difficult. You know, the first time cluster ID came up was, what, probably two and a half years ago, and we said we're going to need this, and then it went away.
A
So I get where you're coming from, and here's what I specifically am thinking of as a path forward with regard to formalizing cluster ID as we make progress on multi-cluster services and Work.
A
What I expect is that Work is logically very close to the point where it will make sense to define cluster ID in the context of Work. But I'm not sure, because I know that Valerie's employment life has been kind of disrupted recently.
A
I wouldn't want to block on Work, but I suspect that we'll have at least a good shot at being in a situation where, if we take some initial steps with multi-cluster services, we'll probably have another example of cluster ID that we'll be able to formalize, and the other example would be Work. I think we can probably formalize them both while the APIs are still alpha.
A
And I'd want to not formalize cluster ID without at least one more real use case that exists today within the scope of the SIG. So I'm sort of thinking: maybe we can throw the football just a little bit longer and leave ourselves room to work backward into a formal cluster ID as we get to the point where we need it with Work. Does that seem reasonable to people?
A
I think that's a very good idea, and when Work enters the KEP phase of its existence, I would say that's a similar criterion for alpha-to-beta graduation for Work. Cool.
A
But I do agree, Tim. I don't want to be in a situation where we grow a bunch of different concepts that are similar but disjoint, and find ourselves a year from now saying: oh, we should have really put the energy behind that.
A
Yeah, I would expect that three to four months from now we'll be at the point where we'll have two real examples and we'll be able to formalize and work backward from that.
C
Yeah, that makes sense. Cool.
A
Well, if we're going to proceed with that plan, do we want to kind of put the A-frame up for the house a little bit?
A
It seems like it's a subdomain, and maybe for now we could even say that it's immutable.
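If the constraint lands as a DNS subdomain, the existing apimachinery validation helpers can check it. A sketch (whether it ends up a single label or a subdomain is exactly what is being framed here):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func validateClusterID(id string) error {
	// Use validation.IsDNS1123Label instead if the stricter
	// single-label rule from the MCS KEP is the one that is kept.
	if errs := validation.IsDNS1123Subdomain(id); len(errs) > 0 {
		return fmt.Errorf("invalid cluster id %q: %v", id, errs)
	}
	return nil
}

func main() {
	fmt.Println(validateClusterID("cluster-a.prod.example")) // <nil>
	fmt.Println(validateClusterID("Not_Valid"))              // error
}
```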
C
Yeah, I think that kind of makes sense. Honestly, in the context of MCS, if you did change it, from the perspective of the multi-cluster service it would be like redeploying all the pods in that cluster into another cluster, right? Like, if it was headless, they would all get new names. So with that, though, I don't know that we really need to specify that it is immutable.
C
I mean, it would be like moving the cluster, and all of the side effects that come with that.
A
Well, it seems like functionally it can be thought of as immutable for the time being, at least, because you can think of mutation as deletion and then recreation.
C
True, and yeah, actually that might be a good way to describe it: just say that mutation is equivalent, in general, to deletion and creation. Otherwise, and again I don't want to dig too much into what this future could look like, but if you can actually change it, then every consumer needs to be aware of that ability, right?
A
Okay, so maybe we can move on then, if we think that's enough framing on that.
C
Okay, cool. So then the second thing was updates on multi-cluster services. Really, what I'd like to get out of today, before digging into this, is an understanding of what's left before we can consider this implementable for alpha.
C
We did an API review last week with David Eads and Jordan Liggitt to get an update on things, and some of the ideas that came out of that I just wanted to go over with everyone. But the big thing I'd like to hopefully understand today is what's left: what work needs to be done before we feel good about calling it implementable.
C
Cool. So I think the first thing that came up, out of some comments on the PR and the API review as well, was the idea of:
C
Can a service be selectively imported into some subset of clusters? This has been coming up a few times and hasn't been addressed yet. I think we want to make sure that namespace sameness still holds wherever the namespace exists, but I don't think we necessarily need to require that the namespace exists in every cluster. So basically, if a cluster didn't have the namespace, it wouldn't import the service.
C
If it did have the namespace, it would. Does that kind of make sense? I think that was some of the idea that came out of, you know... Vishal, I think, mentioned something similar.
A
I'm reading and thinking. I'm not exactly sure what you're asking about. Does it work?
C
I mean: does it make sense? So far we've always talked about every cluster in your supercluster importing the service. But if there was a namespace that I didn't want in a cluster in my supercluster at all, then it seems like that would be an acceptable situation, and then the service wouldn't be imported.
A
Meaning you have a cluster in the supercluster that the namespace shouldn't exist in.
C
Right. If we're saying that namespaces are owned by the same owner everywhere, it does seem reasonable that certain owners and teams are not allowed in certain clusters, and in that case nothing in that namespace would exist. But that's the level of granularity: I don't think we want to go further and say you can selectively import a given service, because that starts breaking namespace sameness, where the namespace is treated the same everywhere.
C
Right, yeah. So instead it's basically saying: if namespace foo exists in a cluster, and there is a service in your supercluster that's been exported in namespace foo, then it will be imported into namespace foo. If namespace foo does not exist, the implementation is not necessarily expected to create it, and so we just wouldn't import that service if the namespace isn't there.
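A sketch of that gating rule from an implementation's point of view, assuming a controller holding a client for the importing cluster (the helper name is illustrative):

```go
package mcs

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// shouldImport reflects the rule described above: only import an exported
// service into clusters where its namespace already exists; the
// implementation is not required to create the namespace itself.
func shouldImport(ctx context.Context, c kubernetes.Interface, namespace string) (bool, error) {
	_, err := c.CoreV1().Namespaces().Get(ctx, namespace, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // namespace absent here: silently skip the import
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```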
D
Emphasis on implementation, and not necessarily required, right? So it would be a valid implementation to say: I'm going to go make the namespace in all your clusters, or all the clusters with the selector, and import into all of those. But it would also be a valid implementation to say: you create the namespaces and I will just use them if they exist, right?
A
So I'm maybe confusing myself. In my mental model I have to make a service import if I want to import the service, but I'm hearing more that something is orchestrating that and automatically creating it if the namespace exists, and I wonder if I just have the wrong thing in my head.
D
Here, yeah, right. There's: I want to export my service to be part of the whole. And there's: I want to consume services, potentially from other clusters; I want to consume multi-cluster services. In order to consume them, the namespace has to exist, but the controller is the thing that creates the import.
A
All right. So with that context, it seems reasonable that if the namespace doesn't exist, it doesn't exist.
C
Exactly, and one of the benefits there is that if you had a couple of services, for example, exported in a given namespace, and you moved to a different cluster where the namespace exists, then, sticking with the idea of namespace sameness, the same services would be imported in all the clusters with that namespace. So you wouldn't need to think: oh, did I remember to import this specific service for this cluster? The namespaces are just consistent everywhere.
B
Yeah, no, I mean, it works for me. I'm kind of wondering about people who use the various different distributions of Kubernetes, and was trying to think whether there's a distribution of Kubernetes that uses namespaces very differently that is going to be unhappy with us. It certainly works for OpenShift, which already treats namespaces this way.
C
Awesome, that's great. The next thing that came up was conflict resolution. We had a few ideas around conflict resolution, you know, like not exporting a conflicting service. We had some discussion when we were reviewing the API, and I think the takeaway was that a smarter, or at least easier to reason about, implementation would be to give precedence based on the ServiceExport creation timestamp, so the oldest service export breaks the tie.
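A sketch of that tie-break, assuming the implementation has already gathered each cluster's export of a service (the types here are illustrative stand-ins):

```go
package mcs

import (
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterExport is an illustrative stand-in for one cluster's view of a
// ServiceExport and the properties that can conflict.
type clusterExport struct {
	ClusterID string
	Created   metav1.Time // ServiceExport creationTimestamp
}

// precedence returns the export whose properties win any conflict: the
// oldest ServiceExport by creation timestamp, as discussed. Assumes a
// non-empty slice.
func precedence(exports []clusterExport) clusterExport {
	sort.Slice(exports, func(i, j int) bool {
		return exports[i].Created.Before(&exports[j].Created)
	})
	return exports[0]
}
```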
C
So, for example, in the KEP we talk about a best-effort merge on service ports when creating that multi-cluster service. But if there's a conflict, let's say two ports with different port numbers have the same name and we can't reconcile that, the tie goes to the oldest service export. Same with session affinity, which may disagree between clusters, and headlessness.
C
So instead of what we had first talked about for headless, which was that it's headless only if all services are headless, I think it makes sense to just have one model for conflict resolution. We can say: if there's any disagreement, then we'll create a condition, we'll explain the conflict, and precedence will be given to the oldest export.
C
Of course, this means that we're taking a dependency on clock synchronization, which is probably going to lead to some pain points, but at least it's really easy to reason about. If you query all your clusters, you can see a value which clearly dictates which one should be winning, and so, yeah, it's easier to debug.
A
I think, even in the presence of clock skew, it's probably the easiest heuristic I can personally think of to reason about, right, if you're trying to understand why the controller favored X or Y versus another one that was conflicting. That's my own personal opinion there.
C
Yeah, and it's based on a value that actually gets written to the resource too. So it's not like you have to know about the time on each host; you can see the YAML and who should be winning. And then the other really nice benefit is that if you roll out something accidentally that disagrees, it won't change anything, right?
A
Well, but even so, even in the presence of clock skew: if I think about a scenario where you're like, I didn't quite expect that, let me go read about this thing and see if I can understand why it worked the way it did, then fundamentally you're doing the same thing with and without the presence of clock skew. You'll learn the conflict resolution behavior, and you'll...
A
Look at the creation timestamps that the objects report, and it won't matter at that step whether the clocks are skewed or whether they're all on the same NTP server. It doesn't matter at that phase whether there's clock skew or not. You might be able to perceive the presence of clock skew at that point, when you say: hey, I know I created this one before this one, but maybe I've got a 10 second clock skew or whatever. You might be able to perceive it at that point, but reasoning about it is fundamentally the same, right?
C
In my mind, yeah. The edge case that I don't think there's really much we can do about is if you added a new cluster that thought it was 1970 and you created a new service export in it; that would change the existing behavior. But if that happens, something has gone terribly wrong, and hopefully you've prevented it.
C
Cool, awesome, so that seems non-controversial. So with that: conditions on the service export. I think we can narrow them down to just three different conditions, and this came out of the API review as well. One useful one is Ready, which basically says that at some point the MCS controller saw this export and the implementation is saying, I can do something with this. We're not making any guarantees that it exists in every cluster or anything.
C
So that's Ready, and it would be true once the export has been successfully read. Then a service export invalid condition, Valid, which we would set when the service that you're trying to export is not exportable: either it's type ExternalName, which we won't support, or the service actually doesn't exist.
C
And then a conflict condition, if there's a conflict between services, so whenever they disagree. Everything would still function according to our first-come-first-served conflict resolution, but this condition would be set.
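Put together, a sketch of the three ServiceExport conditions discussed, using the standard condition shape; the type, reason, and message strings here are placeholders, since naming was still being settled:

```go
package mcs

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// exampleConditions illustrates the three ServiceExport conditions
// discussed: Ready, Valid, and Conflict. All values are placeholders.
func exampleConditions() []metav1.Condition {
	return []metav1.Condition{
		{Type: "Ready", Status: metav1.ConditionTrue,
			Reason: "Exported", Message: "the implementation has seen this export and can act on it"},
		{Type: "Valid", Status: metav1.ConditionFalse,
			Reason: "UnsupportedType", Message: "ExternalName services cannot be exported"},
		{Type: "Conflict", Status: metav1.ConditionTrue,
			Reason: "PortConflict", Message: "port \"http\" conflicts across clusters; precedence given to oldest export"},
	}
}
```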
C
I think all of them need to, because no matter where you're looking, you would want to know that something is wrong.
C
Yeah, so I think my take has been that the messaging on the conflict, the way I wrote out that condition, is that the message should contain enough information to diagnose it. Put another way, it should contain the conflicting cluster IDs and potentially which one the tie went to.
D
There's potential for multiple conflicts, though, so we need to balance the need for details with the usage of giant strings in there, right?
C
Right. So one thing you could do is say there's a conflict in the cluster, and we could simplify it and basically say: the tie went to this one, and something in the cluster disagrees with it. Because otherwise, if you had a massive deployment with hundreds of clusters, we'd get into a situation where we have to create this graph of who disagrees with whom. Yeah.
C
And then I wanted to go through the import, because there were a couple of things here. After some discussion around networking, I think it made sense to move session affinity. It was originally on the cluster status, and I think it makes sense to make it a top-level parameter, because it can't actually work in isolation, even though kube-proxy can technically support this.
C
But a lot of implementations potentially couldn't support different kinds of session affinity for different pods, and regardless, it's something that can't happen in isolation. If you import endpoints that have sticky sessions from one cluster, and endpoints that don't from another cluster, you get the flypaper effect: traffic ends up on all the ones with sticky sessions. It just doesn't make sense.
A
So I'm realizing that my own thinking is very focused on situations where, say, we have 10 clusters and they're cluster IP, and we add one that's headless. I guess we should also be sure that we're thinking about what happens if I have 10 clusters with service X's cluster IP in all of them, and I bring five new ones in that are all headless. What will the conditions look like in that case? It's easy when it's just one, right?
C
So I think what we'd say is, and maybe there's a number that disagree, we could basically say: conflict, headless; the tie went to the oldest cluster; and then maybe we can say N clusters, five clusters, disagree.
C
That seems almost definitely like something you'd want to fix, because that is a configuration that could exist, but in practice I think a service is really either headless or not, and so it's probably a mistake. If something was cluster IP, it's not going to become headless, or vice versa.
A
So in a situation where you have X number of clusters that have cluster IP and Y number of clusters that are headless, you're finding the oldest in each of those bags and comparing timestamps, right?
C
Right, yeah. And the specifics there could be different. Maybe it's: five clusters have this service type and are conflicting with this one that has this other service type. And then maybe we can say the five clusters are, like, two cluster IDs and then three more, or something like that; something that can help you actually start figuring out where to look. But yeah.
C
I don't know that we need to strongly define how that message looks, beyond saying that it should probably give the number of clusters that disagree and which one the tie went to, so you can look in. There could be more information, but either way, I think in that situation it would be roughly the same message whether there's one that disagrees or ten that disagree.
A
We might want to think about bounding the length of the message, yeah, and some smart calculation of what the right text is, so that it always fits into, I don't know, 120 characters or whatever, and your available space dictates how many clusters you can see in there. Versus: we just know that there were 200 of them, some sufficiently large number, right?
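A sketch of that bounding idea: name disagreeing clusters while they fit in the budget, then fall back to a count. The function name and budget handling are illustrative, and the suffix may slightly exceed the limit:

```go
package mcs

import "fmt"

// conflictMessage lists disagreeing cluster ids while the message fits
// within maxLen, then summarizes the remainder by count, per the idea
// above of letting available space dictate how many clusters you see.
func conflictMessage(winner string, disagreeing []string, maxLen int) string {
	msg := fmt.Sprintf("conflict: tie went to oldest export (%s); disagreeing:", winner)
	for i, id := range disagreeing {
		next := msg + " " + id
		if len(next) > maxLen {
			return fmt.Sprintf("%s and %d more", msg, len(disagreeing)-i)
		}
		msg = next
	}
	return msg
}
```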
C
Right, yeah. I was thinking that, since it will be the controller, the implementation, that actually sets that, we should describe a best practice for that message. But I was thinking we could probably avoid actually creating a format.
C
Cool, yeah, okay. I will add that as soon as we're done here today. Awesome. Okay, so then I guess the structure of service import, I think, is key; that's another big thing to talk out here. Basically I'm thinking that the spec should be, you know, what we kind of talked about before.
C
I think there might be some open questions here, but the spec would basically have all of the service information, and then really all that needs to be in the status for now, at least while the topology API is still being sorted out, is basically the names of the clusters, which we would use for headless services.
C
That's the status. And so I just wanted to give anyone a chance to bring that up. I think for me this feels a lot like Service; having it on spec is kind of familiar, whereas the clusters are strictly derived information. But I guess the counterargument was that all of this is actually derived information, so.
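For reference, a sketch in Go of the shape being discussed: derived service information on spec, and the backing cluster names on status. Field names approximate the draft under discussion and are not final:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type ServiceImport struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ServiceImportSpec   `json:"spec,omitempty"`
	Status ServiceImportStatus `json:"status,omitempty"`
}

// ServiceImportSpec mirrors the consumable fields of the exported Service.
type ServiceImportSpec struct {
	Ports           []corev1.ServicePort   `json:"ports"`
	IP              string                 `json:"ip,omitempty"` // empty for headless
	SessionAffinity corev1.ServiceAffinity `json:"sessionAffinity,omitempty"`
}

// ServiceImportStatus carries derived data: the names of the clusters
// currently backing this import, used for per-cluster headless DNS names.
type ServiceImportStatus struct {
	Clusters []string `json:"clusters"`
}
```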
D
In the modern world of random back-offs, I think spec is appropriate, and here's why: you could conceivably have different implementations that could actually be using status to update you as to the local status of the import, which is different than what you wanted to import. In this case it is spec; you're saying, this is what you are importing, right?
A
Yeah, I agree. And I was going to ask: say that you are rolling out a change to the service, the service that's being exported, and you're adding a new port. Would you expect to have to change the service import spec to see that new port?
D
Yeah, so we talked a little bit about this at the API review. I have concerns that if you take an IP here, and you expect kube-proxy to start capturing that IP address, then anybody who can create a service import, who can create a service import spec specifically, can essentially capture any IP address, which seems like a bad thing. It's not worse than other problems that Service has, but I'm not sure that we should recreate the exact same problem.
D
The difference here, though, is I don't think we expect normal cluster users to be creating service imports. That should be largely reserved for administrators, an admin sort of operation. So Jordan sort of convinced me that that's probably okay. We need to be really clear that you should not give the owner of the namespace the ability to write to their own import specs, right?
D
That said, I'm still chewing on whether we can make the IP effectively be a reference to a cluster object, which would effectively be the grant of the IP address. The way I'm thinking about it here is, literally, the resource could be named the same as the IP address. So you wouldn't need to change an API like this, it could still be an IP string, but the controller implementation, or kube-proxy, could say...
A
And you can express an IPv6 address with the colon format, can't you? Would that be a valid Kubernetes resource name, even if there are colons that occur right one after another?
D
Yeah, we would have to normalize that. So consumers who wanted to validate this would have to turn the shortened v6 form into the expanded v6 form, which has a whole lot of zeros. Basically, two colons together means there's a bunch of zeros here, and you can use the rest of the address to figure out how many zeros that was. We would just want to expand that when you were doing the check, I think.
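A sketch of that normalization check: parse both forms before comparing, so the shortened "::" form and the fully expanded form are treated as the same address:

```go
package main

import (
	"fmt"
	"net"
)

// sameIP compares an address derived from a resource name against a spec
// value after parsing, so v6 shorthand and expanded forms compare equal.
func sameIP(a, b string) bool {
	ipA, ipB := net.ParseIP(a), net.ParseIP(b)
	return ipA != nil && ipA.Equal(ipB)
}

func main() {
	fmt.Println(sameIP("2001:db8::1", "2001:0db8:0000:0000:0000:0000:0000:0001")) // true
}
```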
C
Awesome. So with that, what is left, do we think, before implementable, before we get rolling on this?
A
Sorry, I'm experiencing complete failure to brain. There was a note about a cluster ID plan for alpha-to-beta, I think beta-to-GA. As I'm thinking about it now, while I'm talking, it needs to be more than just a plan: I think cluster ID would need to be whatever the GA version of that is, if our API is going to depend on it. Yeah, and then I had a question also about...
C
Yeah. So, in thinking about this, I guess right now it seems like yes, I think we have thought about it enough that this could go to GA and be useful. But what I'm really excited about is getting this to alpha and getting it some use, so that we can see how true that is.
A
Okay, so it sounds like we're not hearing any reason not to call it implementable as you update your PR. So maybe you can just update that aspect of it in the PR too.
C
Cool, yeah. I'll update that with the comments from today, and I'll mark it implementable in the PR, and then we can get rolling with it. Cool. Also, on that note: I updated my demo repo with a working CRD-based kube-proxy build that, if you use the multi-cluster services flag, will actually enable this, so that that aspect of it actually works as well.
A
Awesome, all right. Well, I think that we are at time for now. Thanks a lot for the presentation today, Jeremy.