From YouTube: Kubernetes SIG Multicluster 2021 Mar 9
C: Oh, problem verifying my identity... all right, all right. Laura, you usually come with a deck; I suppose you probably want me to enable screen sharing.
C: Good, excellent, and I was able to hear you, Jeremy. Perfect, all right. Yeah, so we'll just give it a second here. Hope everybody's having nice weather today; beautiful sunny skies here in Raleigh, North Carolina. It's a cool 66 degrees, feels great, not a cloud in the sky.
D: I'm a... I'm a snowboarder as of a couple years ago, so snow is actually welcome, so we get the nice fluffy stuff.
C: Excellent. All right, why don't we get started? Welcome, everybody, to the Tuesday, March 9th, 2021 meeting of SIG Multicluster. Laura, you're first on the agenda; why don't you go ahead?
B: Hey, all right! I'm going to share my screen over here. I have two things to bring up, and I'm going to start with cluster ID, which is this KEP that I've been working on for a while. Thank you to Paul and Jeremy for making some comments; I'm definitely still open for more comments from other people, so come on by. In particular, I wanted to highlight this comment that Paul had made. I had put in a little example: a ClusterClaim for both clusterset.k8s.io and id.k8s.io, with apiVersion multicluster.k8s.io, which assumes that we can use the API group k8s.io. I think that's possibly related to other conversations about how we want to manage this API, but I wanted to break the ice on that and see if there were some opinions here.
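For reference, the ClusterClaim shape being discussed might look roughly like this. This is only a sketch: the group/version string (multicluster.k8s.io/v1alpha1) and the exact field layout are illustrative guesses based on the discussion, and the API group itself is exactly the open question.

```yaml
# Hypothetical sketch; the API group and version are what's under debate,
# and the field layout may differ from the actual KEP draft.
apiVersion: multicluster.k8s.io/v1alpha1
kind: ClusterClaim
metadata:
  name: id.k8s.io        # a unique identifier for this cluster
spec:
  value: my-cluster-1
---
apiVersion: multicluster.k8s.io/v1alpha1
kind: ClusterClaim
metadata:
  name: clusterset.k8s.io  # the clusterset this cluster belongs to
spec:
  value: my-clusterset
```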
C: So, what was in my head was that the cluster ID resource seems like it may have applications beyond multi-cluster. So, the question of k8s.io aside, "multicluster" as the name coordinate seems like something we might want to think over. And then, on k8s.io: someone please correct me if I'm wrong, but I think you've got to be in-tree to go for that domain name, if I'm not mistaken, and I might be.
F: You're beholden, yeah, you're beholden to me and Jordan and the few other people who feel so inspired to pick nits. In this case, I don't think that's a particular hardship, because I think we want this to be generally applicable, so we don't want to go off and do crazy things, and this is a really simple API by comparison anyway.
F: The practical side of it is that it's easier (it has been easier in the past, I believe) to manage all of those APIs in a single repository, but I don't think that's literally required. And I honestly have no idea what the final pattern was that we arrived at for CRDs that use the k8s.io suffix: the question of who loads the CRD, what happens if the CRD gets versioned, and what is it associated with? I honestly have no memory of what we decided, or if we decided on anything there.
F: Some of the storage stuff was doing CRDs, but I don't know where they landed. So we should probably talk with Michelle, or someone who will certainly know what the current state is.
C: I don't hate it either. The only ingredient I'll just throw into the bubbling cauldron, as we make the stone soup of requirements for this name, is that it should probably have some relation to the eventual name of the resource that we choose.
B: That is the winner. I haven't sent out the official email of the data, but those are the standings.
B: Yeah, and I think there's also a possibility here: even if we introduce other kinds that aren't ClusterProperty, calling this cluster property, or properties.k8s.io, or cluster.k8s.io (I don't know if that's already taken), just in the sense that this whole idea is about arbitrary metadata about your cluster.
C: Yeah, here's my... here's my throw-it-against-the-wall: about.k8s.io.
B: Very... I don't know, web 2.0.
F: I don't... yeah, I don't hate it. It falls, I think, on the other side of cute for me, but I don't hate it outright.
F: Let's not make this a huge deal; I don't think we need to call out another survey on this, right? Yep.
F: Okay, we should probably do that. First, let's talk to storage and figure out what we want to do around CRDs, but then bring it to SIG Architecture and say: look, we find general utility for this within the multi-cluster space, and the cluster lifecycle folks are interested too, so we'd, you know, like to consider bringing it in-tree and going with the k8s.io suffix; we're open to names, but here are the ones we're proposing.
C: "Meta" is one that came to mind for me, but I didn't know if there would be a name collision with the meta types. Well, it'll be confusing.
B: Meta meta... all right. Well, I think that's also a good highlight, because I do have this section down here that I think is probably going to be topical, especially when I go talk to sig-arch. So I would definitely love for people who feel things about this to give it a look, a deep look, because I'm going to be talking to people who have even more opinions about this, I think.
F
So
yeah
talk
talk
to
michelle
about
the
realities
or
or
if
you
talk
to
her
and
if
it's
useful
invite
her
to
one
of
these
meetings.
Maybe
if
you
think
it
needs
to
go
beyond
that
and
see
what
their
feeling
is
on
the
repercussions
of
those
decisions.
I
know
that
they
were
one
of
the
very,
very
first
adopters
and
so
they've
had
to
deal
with
all
of
the
bumps
cool.
B: Good, then they will be good to talk to about the bumps. Great, okay. So, besides the general still-open-for-comments lifestyle, I think that ties up all the current comments and gives me a path forward for cluster ID here, so thank you. The next topic I would like to discuss is multi-cluster DNS. A.k.a., you thought I was done talking to you about KEPs with this one, but the answer is no, because I want to talk about multi-cluster DNS also. So I made this document, and this is something that's specifically called out in the MCS API KEP as blocking graduation to the next step: to figure out a specification for multi-cluster DNS. That's why I want to open that conversation here.
B: There's a bunch of background and stuff in here, and I just made this public yesterday, so I think this is mostly just to introduce it and tell people to give it a look. One thing I want to highlight, which I also highlighted in this comment here, is that a lot of the meat of the proposal is literally lifted from the DNS spec that already exists for a single cluster: records for a service with a cluster IP, like an A/AAAA record, an SRV record, blah blah blah. And then, to try and make it easier to read, I've highlighted in orange where I've materially deviated from the prose of the original specification, so that it's hopefully more clear, when you're reading it, what I'm drawing from the existing spec and where I'm doing the new multi-cluster thing. It's explained in the background as well, but the idea is that all of these record proposals would fit in as, like, section 4-point-whatever or 3-point-whatever: another addition to the DNS spec that already exists.
B: So hopefully that gives some intro, and my request is to please take a look at it. Then, as people put in comments, I'm going to try to do a couple of rounds here of answering them or giving more presentations, to get some more people on board. I'll also take any initial thoughts, I guess, if people have already given it a look or have feelings.
B: Totally, I do. So, from the original spec, there are already specified A/AAAA records for cluster IPs, SRV records, PTR records, and then the same thing for headless services, and that's what I'm deriving from. Then, specifically, I'll talk about cluster IP services.
B: I think probably the most important thing is this first one here, which is kind of the representative record format. Basically, there will be a clusterset IP that's assigned by the MCS controller, and so there should be a DNS record at <service>.<namespace>.svc.<clusterset zone> (which should be clusterset.local, as opposed to cluster.local) for that clusterset IP. So the example here might be: if you had this multi-cluster service name in this namespace, the .svc part is normal style, and then instead of cluster.local it's clusterset.local. Here you can also see I spelled it right down here and spelled it wrong up here, so my bad. So that's the main case for cluster-IP-type services, and then, scrolling down a while, headless services.
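The clusterset naming pattern just described can be sketched as a small helper; the function name and example values are mine, not from the proposal.

```python
def clusterset_dns_name(service: str, namespace: str,
                        zone: str = "clusterset.local") -> str:
    """Name at which the clusterset IP of an exported service would
    resolve, per the proposed <service>.<ns>.svc.<clusterset zone>
    pattern (clusterset.local instead of cluster.local)."""
    return f"{service}.{namespace}.svc.{zone}"

# A service "myservice" exported from namespace "test" would resolve at:
print(clusterset_dns_name("myservice", "test"))
# myservice.test.svc.clusterset.local
```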
B
There's
kind
of
an
important
point
here,
which
is
while
following
the
original
spec
for
every
ready
endpoint
for
a
headless
service,
should
have
a
dns
name
following
that
same
pattern,
scrolling
down.
There
also
should
be
the
possibility
here
that
we
take
just
like
this
is
also
derived
from
what
tesla
services
do
now,
but
there
should
be
a
record
format:
hostname
cluster
id
bing,
bing
bang.
This
is
the
important
you
know.
B
Change
here,
service,
ns,
svc
cluster
set
zone
for
each
ready,
endpoint
for
headless
service,
so
like
possible
effect
here,
would
be
if
this
was
the
cluster
id,
because
this
is
like
a
cube
system,
namespace,
uuid
type
thing,
and
maybe
the
it
has
this
host
name.
Then
we
would
be
able
to
disambiguate,
even
if
there
was
something
with
even
the
same
host
name
in
the
same
name
space
by
this
cluster
id.
If
there's
a
headless
service
in
in
multiple
clusters,.
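The per-endpoint pattern for headless services can be sketched the same way; again, the function name and the example hostnames and cluster IDs are hypothetical.

```python
def headless_endpoint_dns_name(hostname: str, cluster_id: str,
                               service: str, namespace: str,
                               zone: str = "clusterset.local") -> str:
    """Per-endpoint name for a headless multi-cluster service, per the
    proposed <hostname>.<clusterid>.<service>.<ns>.svc.<clusterset zone>
    pattern. The cluster ID segment disambiguates endpoints that share
    a hostname and namespace but live in different clusters."""
    return f"{hostname}.{cluster_id}.{service}.{namespace}.svc.{zone}"

# Two pods with the same hostname, exported from different clusters,
# get distinct names:
print(headless_endpoint_dns_name("pod-0", "cluster-a", "myservice", "test"))
print(headless_endpoint_dns_name("pod-0", "cluster-b", "myservice", "test"))
```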
F: So, I apologize that I have not been able to read this yet; once I come out the other side of code freeze and perf week, this will be high on my list. All the Google people are laughing. Obviously, I'm very close to the old, the existing DNS schema, so I know where all the bodies are buried. I... I hate it.
F: And yet my inclination is probably that consistency with it is better than changing it here. Even though I want to change the main schema (we have to figure out a way to version that and make it opt-in so that people can migrate, and we have some proposals around that), given how immature those are, my inclination is to say that, yeah, following the pattern, even though the "svc" is completely useless in this context, is probably the right thing to do.
F: I want to say one of the biggest problems in the original, in-cluster DNS was sort of over-specing stuff in a way that we weren't really sure was going to be useful. So I want to be really careful, as we look through this, that we don't specify things that we don't really need yet. I don't know if that applies or not, but I want to think hard about that. Specifically, stuff like the headless names is useful in very, very limited contexts.
F: I want to be careful with that. And the other one that I think we actually got wrong, and will have to fix in the regular schema, is this notion of ready endpoints.
F: I think there is still value: if there is an A name assigned to an endpoint, there's value in having it assigned even when the endpoint isn't ready, just not having it be part of the RR set for the larger name. That's something we're going through right now with Service, making drains cleaner and respecting not-ready. And not-ready is different than non-existing, and we need to be clear about that. But otherwise, I think you're taking the right approach.
B: Okay, well, that definitely just gave me some tips about the ready-endpoint drama that's going on. I definitely hear you on the... like, I knew there was something going on with y'all about wanting to change how DNS works, but I definitely wasn't prepared to solve that problem.
F: Yeah, no, it's not fair to lump that on you. And honestly, the worst problems that we have with the, it's called, the single-cluster DNS schema don't really apply here, because I don't expect users to be setting up their search paths into clusterset.local. But actually they can, and so this will be subject to some of the same boo-boos that we made there, which, you know, honestly, if we're going to fix it, we'll fix it in both places.
B
Gotcha,
okay,
then,
I
think
my
final
call
to
action
is
every
to
everybody
is
to
give
this
a
read,
feel
free
to
comment
and
then
I'll
also
come
back
next
week
and
give
sort
of
a
little
like
presentation
about
the
basics
on
it
too.
And
then,
if
there
are
any
comments
between
now
and
then
I
can
also
surface
those
at
that
time
for
discussion.
B: Great, okay, I think those are my two topics.
C: Okay, so the next one I put on here: I didn't hear any dissent on archiving the cluster-registry repo from the mailing list. Please, somebody correct me if I'm wrong, but I think we are achieving lazy consensus now to do that. So my only question to this group is: do we do an explanatory commit to it before we archive it?
C: Okay. I haven't archived something in kube before, so I'll see what I can dig up around the right way to do that, and cut a tombstone commit to that repo.
C: All right, the only other thing I've got is that Trojan and Rainbow and Mango are kind of working on some of the stuff in the work-api repo; there's been a client package that has gone in, and the proposal has gotten translated into a markdown. So I am going to ask Trojan if he could give a readout on that at our next meeting, and we can talk as a group about what direction we feel it needs to go in to get it to kind of an alpha state, that kind of thing.
F: Yep, 100k issues and PRs; pretty nice, yeah. Just pat yourselves on the back, everybody; thanks for participating. It's been a hell of a ride so far, but I claimed dibs on one million.
F: Yes, I told him that's what'll go down in the annals of kube trivia. The next time somebody wants to do a Kubernetes trivia game, they always ask who has commit number one, and now they'll be able to ask who has commit number one hundred thousand.