From YouTube: Kubernetes SIG Multicluster 2022 Apr 5
A: All right, it's three after, let's get started. Welcome, everyone, to the Tuesday, April 5th, 2022 SIG Multicluster meeting. Laura, it looks like you are our agenda today.
C: Hopefully that is okay. Okay, all right! So I wanted to talk today about the About API. I have a little demo to show and also some slides I want to share. In particular, there are some discussion points I want to ask the group about later, so let me slide over here to these slides, which are linked.
C: First, I think, here we go, okay. So let me re-establish what the About API is: formerly known as ClusterClaim and cluster ID, and you may have also heard the term ClusterProperty. All of these refer to fundamentally the same thing, so that's just to trigger anybody's memories.
C: Overall, this comes from KEP-2149, ClusterId for ClusterSet identification, and it is now implemented per that KEP in this repo. I'm going to show a demo of it later; that's part of the big reveal of today. Just to remind everybody: this is a cluster-scoped CRD that basically maps a name to a value, and there are specific usages of it that are needed for MCS, which is why we worked on it in SIG Multicluster.
C
In
particular,
this
one
called
id.kates.io.
Here's
two
potential
examples
of
it.
This
is
a
clustered
property
in
the
about
api
that
identifies
a
cluster
by
its
name.
So
this
is
a
way
that,
like
a
cluster,
can
ask
itself
what
its
name
is.
There's
a
lot
of
sort
of
background
in
the
kept
to
about
like
how
what
this
this
property
should
look
like,
and
what
behavior
it
should
exhibit
to
be
conformant
with
this
kep
and
in
particular,
for
its
usages
and
mcs.
C: But I won't go into too many details about that. The KEP also specifies that there should be a well-known property called clusterset.k8s.io that identifies which ClusterSet this cluster belongs to, so again, the cluster is self-aware of which ClusterSet it has membership in. Again, the KEP has a bunch of details about exactly how that resource should behave. As mentioned, these last bullet points are just that there's a bunch of reasons we need this for MCS, and this is how we have implemented it.
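Concretely, the two well-known properties discussed above might look like the following manifests (a sketch: the group/version and the spec.value field follow my reading of the about-api repo and KEP-2149, and the values themselves are invented for illustration):

```yaml
# Hypothetical examples of the two well-known ClusterProperty resources.
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: id.k8s.io
spec:
  value: my-cluster-1          # invented cluster name
---
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: clusterset.k8s.io
spec:
  value: my-clusterset         # invented ClusterSet name
```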
C: So that's a little bit of background. One other background point I want to make, going back a slide really quick, is that these specifically named properties, id.k8s.io and clusterset.k8s.io, each have to have certain behaviors that are specified in this KEP.
C: The KEP also opens up the floor for people to use the CRD to store any other resources under any other name, as long as they follow some general guidelines, which are basically that they can't conflict with the well-known ones, have to use a suffix, and can't use the reserved k8s.io or kubernetes.io suffixes. I just want to bring that up in case it's an interesting point of discussion here, or just as knowledge. This could become arbitrary properties that any specific implementation of anything decides to use, and they should suffix them with something informative about what their implementation is, so that they don't conflict with anybody else's.
C: One that has been discussed in this forum is a ClusterProperty that identifies a cluster's primary network, for multi-network purposes. This is still in conversation elsewhere and TBD, but I wanted to bring it up here.
C: It's an example usage of the About API: something that isn't in the original KEP but may need, or want, to be escalated to an official well-known property, or just info that, if you need random metadata about a cluster, this is potentially a place to put it. One other thing to mention is that the value field is just a string with a length limit and no other validation on it.
C: We will talk a little bit about that length-limit validation itself too, but the value can be of any form. Again, the well-known properties have some rules, and individual implementations can make up whatever rules they want about what their values should look like, but the CRD itself doesn't establish any specifics.
C: Okay, quick demo. Major, major shout-out to Ishmeet, who I believe is on the call; I am just the messenger, she did all the work of actually implementing this, so I'm super grateful for that. I'll just give you a quick look over here. As mentioned, this is accessible: you can grab it from the Kubernetes about-api repo here. I just pulled it up in my VS Code over here, and, this is so small, sorry.
C: There we go, hopefully that's visible enough. So I pulled this repo here, and I also have a kind cluster running locally that right now is super clean and doesn't have any CRDs installed. This CRD is constructed with Kubebuilder, so if you have experience with Kubebuilder, it has all the normal make commands that all other Kubebuilder CRDs have.
C: So we can run make install, and as long as you have your kustomize and controller-gen binaries installed properly, we can see that now this CRD exists. This is the About API CRD, and again, back here, you can look at the source code on GitHub as well, but this is the actual definition being applied there. Then one other thing that I want to show concretely:
C: I first have to remember what directory it's in. It's in config/samples, yes, okay. Already committed in the repo there is this example ClusterProperty custom resource, so we can apply it here. Sorry, I can't talk and type at the same time, it seems, today.
C: So we can see that it's created this ClusterProperty. This happens to be one that uses the name id.k8s.io. The whole idea here is that, anytime, you could ask this API, hey, what is my ClusterProperty with this well-known name, and, let's get this out here, you get this resource back, which has this value, which is whatever the name is. That's the whole purpose of this API, especially in terms of the cluster being aware of its own name.
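The demo flow above can be written down as a short session (a sketch under assumptions: it presumes the about-api repo's Kubebuilder layout, a local kind cluster, and kubectl pointed at it, so it isn't runnable standalone):

```shell
# Sketch of the demo steps shown in the meeting.
make install                                    # install the ClusterProperty CRD
kubectl get crd clusterproperties.about.k8s.io  # confirm the CRD now exists
kubectl apply -f config/samples/                # apply the committed sample resource
kubectl get clusterproperty id.k8s.io -o yaml   # read back the well-known property
```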
C: Okay. So I think that is what I wanted to show there, so I'm going to go back to my slides here. Oops, slideshow.
C: Okay, so I'm super happy that that now exists; you can play with it, you can mess with it. Also, importantly, the KEP is now implementable, and this is basically the alpha API for it, for the purposes of the MCS API progressing itself to beta and eventually GA, which is dependent on this. Okay. The About API's KEP has its own graduation criteria, separate from the MCS API's
C
Criteria
because
they're
two
different
kepts,
so
I
want
to
dig
into
a
couple
of
these
and
also
I
will
save
some
time
at
the
end,
to
just
revisit
these
graduation
criteria
generally.
If
we
need
to
but
there's
three
alpha
to
beta
graduation
criteria,
one
is
related
to
what,
if
this
id
can
must
be
strictly
a
valid
dns
label
or
is
allowed
to
be
a
subdomain.
C: I don't think we totally finalized that, so I kind of want to come back to it at some point. We had ancient, ancient conversations about that, way back last year, and it also intersected a little bit with SIG Cluster Lifecycle and Cluster API, so I want to revisit those notes before I say anything too unintelligent about it.
C: In the end, we decided to go with a CRD, which is how it's implemented now, so I consider this tabled, but I'm happy to talk about it more later as well. And then, sorry, not the second one, but this last one here I do have a separate slide on; it came up during API review for the alpha implementation and relates to that length limit. We have a length limit on the value of any ClusterProperty, and our means to enforce it are kind of dependent on the CRD implementation, and there are some decisions we can make about how we want that to work. One other thing going on in the background: later on, when we want to move cluster ID, or ClusterProperty, to GA, we need at least one headless implementation using cluster ID for MCS DNS. This is available to be worked on.
C: If anybody is interested in a coding project, there is already a CoreDNS plugin for multi-cluster DNS that isn't leveraging ClusterProperty yet, because it didn't exist, and therefore it doesn't create pod DNS records. We would like to integrate that, and that is also a GA blocker specifically for this KEP. So just a heads-up that that is an open project. Now I'm going to jump into the nitty-gritty about this length validation, so that I can get some opinions from here.
C: Through the course of the API review, there were many comments, and lots of empirical data and gists were discussed as well. We originally wanted to limit the value for ClusterProperties to 128,000 bytes, but the OpenAPI validation that the CRD depends on is only capable of enforcing a max length based on Unicode code points.
C: That is not the same as bytes, because it depends on the encoding in the first place. The note down here mentions that how this turns into actual byte lengths varies, but in the worst case, using UTF-32, which is the least space-efficient encoding allowed by the OpenAPI v3 specification, it would be 512 kilobytes, because that's 128k times four. So this is how it's written right now, and this is how the validation on the alpha version works.
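To make the code-points-versus-bytes distinction concrete, here is a small illustration in plain Python (not tied to the CRD itself):

```python
# A maxLength of N in an OpenAPI v3 structural schema counts Unicode code
# points, while a byte budget depends on how those code points are encoded.
s = "é" * 100  # 100 code points, but not 100 bytes in most encodings

print(len(s))                      # 100 code points (what maxLength counts)
print(len(s.encode("utf-8")))      # 200 bytes in UTF-8 ("é" is 2 bytes)
print(len(s.encode("utf-32-be")))  # 400 bytes in UTF-32 (4 bytes per code point)
```

So a maxLength of 128k code points only bounds the stored size at 512 KB in the UTF-32 worst case, which is the "times four" above.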
C: It's based on Unicode code points. But there is another option, called CEL validation, or maybe we pronounce it "C-E-L", I don't know, but I think "cell" sounds cooler. I want to compare and contrast these, including their potential cons. Right now we're using the OpenAPI v3 structural schema. That sounds like a really lingo-y term, but it basically means there are some validation fields, based on OpenAPI v3, that you can set, but they're restricted by the Kubernetes implementation.
C: Kubernetes only allows a certain subset of the total possible OpenAPI v3 validation fields, and there is a maxLength field that is supported by Kubernetes, but for string types, as mentioned, it only works on Unicode code points. Again, if you go back through the history of this KEP and look at all the comments, there's a lot of empirical testing confirming that that's the case.
C: One nice thing about OpenAPI v3 structural schemas is that maxLength, and everything else in structural schemas, has been available since CRDs went v1, which was Kubernetes 1.16, so it's just in your vanilla Kubernetes since 1.16. I do want to highlight that there is a byte type in the OpenAPI v3 specification, including in the Kubernetes-restricted subset.
C: But what it actually means is base64-encoded data, so it has to be ASCII, and if we were to use that type, we would be expecting people to write their arbitrary values in base64, which is super unusable. Because then, if you do a kubectl get clusterproperty id.k8s.io and it comes back like this, you don't get to use the value directly, you have to decode it. So that's what's going on with OpenAPI v3.
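The usability problem with the byte type can be seen in a quick illustration (plain Python; the property value here is made up):

```python
import base64

value = "my-cluster-1"  # a hypothetical id.k8s.io value

# With a byte-typed field, this base64 text is what would be stored and what
# a plain read of the resource would show, not the value itself.
encoded = base64.b64encode(value.encode("utf-8")).decode("ascii")
print(encoded)  # bXktY2x1c3Rlci0x

# The reader has to decode it before the value is usable.
print(base64.b64decode(encoded).decode("utf-8"))  # my-cluster-1
```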
Over on the CEL validation side, this is a new type of validation for CRDs that has a way more expressive syntax. It's particularly useful if you need to make cross-field comparisons; the OpenAPI v3 structural schema super can't compare two fields to each other, but CEL validation can. I don't think we super care about that, but what we do care about is that there's a bunch of built-in macros. For example, it has a casting function that can take some string,
that is, Unicode code points, turn it into bytes, and then, furthermore, calculate its size. By definition in the CEL validation spec, it assumes the string is encoded as UTF-8.
C: So this is the type of CEL expression that we could write. It gets a little bit closer to our original thought process of trying to limit by bytes, but probably the biggest downside is that this feature is currently alpha as of 1.23, so it just straight up doesn't work before 1.23.
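For reference, a rule of roughly this shape is what was being discussed (a sketch, not the actual about-api schema: the surrounding CRD schema is elided, and the `x-kubernetes-validations` field requires the CEL validation feature gate, alpha in 1.23):

```yaml
# Hypothetical CRD schema fragment limiting the value to 128,000 UTF-8 bytes.
properties:
  spec:
    properties:
      value:
        type: string
        x-kubernetes-validations:
          - rule: "size(bytes(self)) <= 128000"
            message: "value must be at most 128000 bytes when UTF-8 encoded"
```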
And then, for people to use this validation and install the CRD on 1.23, they would need to make sure the feature flag for it is turned on. I don't know exactly whether it's planned to go beta in 1.24 or not; in any case, I think beta no longer means feature flags are on by default.
So there's some window of time, one to n release cycles, before this becomes a default feature, and historically we've been a little worried about making the CRD harder to install, because that makes MCS less supportable on older versions of Kubernetes.
C: So I want to pause here and take opinions about this, and also answer any questions if I didn't explain anything clearly.
A: Yeah, I raised a hand on this, I think.
C: Oh sorry, I didn't see.
A: No, no, this is when I wanted to talk, so thank you. As long as it's a string field, validation based on the encoded byte length seems like a pretty poor UX to my mind. I kind of like the current validation, because what you see is what you get: it's a number of characters, and that's something that's really easy to reason about. When it's based on bytes, and the encoding changes the max length, you then have to reason about that, and it just seems a little bit rough.
C: I have some notes, but I kind of forget the details, and it's kind of hard, at least for me right now, to interpret how much a larger value impacts that, since I don't know exactly how the Kubernetes API writes individual resources down in etcd. Are they split up into different values, or is it one big blob, that type of thing. But that, I think, is fundamentally where the problem would manifest, in terms of where the limit could cause an actual issue.
C: And that is fundamentally tied to bytes, which is the only reason I think we're even playing this game.
A: And that makes sense. I just think that, given where we're at, what we should do is look at that limit. If it's significantly higher than the 512 kilobytes that this would be, then I think we don't have much of an issue. Maybe the right way to look at this is to just figure out what the limit is, figure out what overhead we need with the CR, and then, you know, divide by four and call that the max Unicode length. I think expressing the constraint in Unicode is probably the way to go, and we should just work back from the limit. If I remember correctly, the 128 was arbitrary anyway, so if it needs to be another arbitrary number, because that's too big, that's probably fine, but I believe we're nowhere close to the limit. Yeah.
C: Go ahead. Yeah, I guess this whole conversation is also trending toward: CEL validation already has the one downside that it gives us less version compatibility, and it's not even valuable if we're now basing things on the assumption that communicating the limit in code points is the better way anyway. Yeah.
C: Okay, anybody else feeling strongly?
C: You're on mute. That's okay! Thank you. Sorry, I couldn't find my buttons. Okay, yeah. So in the KEP, and as we previously discussed in this forum, a nice default for id.k8s.io could be the UUID of the kube-system namespace, and this has been talked about in other forums too, even before SIG-MC. We want to make this an optional thing that occurs using this controller implementation. We have some other ideas of things that could be cool to put in this API.
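For instance, the kube-system namespace UID mentioned above can be read with a one-liner (a sketch; it needs a live cluster, so it isn't runnable standalone):

```shell
# The UID of the kube-system namespace, a candidate default id.k8s.io value.
kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'
```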
C: They might be useful for folks; here's a couple of ideas. This also might be, in particular, something that would be good to talk to SIG Cluster Lifecycle about. They have a ton of their own things they need to keep track of that they've put into different annotations all over the place, and they might like to consolidate some of them, or at least advise us on what some of these interesting properties are.
C: But I would like to open the floor to get a better sense of what the overarching direction for the About API controller implementation in the Kubernetes repo should be. Do we have any sense of what actually belongs in this open-source controller implementation, as opposed to people separately implementing whatever properties?
C: If we do decide that one, or all, of these things should be well-known properties, does that give them any reason to be handled somehow in the upstream controller implementation? And then also, how do we want to make a process to onboard new well-known properties?
C: Is it just going to go through new KEPs, or stuff like that? So there's a kind of swirly level of questions I have here. At this point the About API is sufficient for the immediate use cases we have for the MCS API; it doesn't need anything extra added to it. But there are these other things we could be working on, and do we want to, and do we have any goalposts for where the line is?
A: I think it's probably worth shopping around with other SIGs, to see if there are any other properties that we're missing. This list seems really good, but it's also actually all implementation-dependent; every platform has its own definitions.
A: And then the question is, if it is standardized, because I agree with that, should it be ClusterProperty that standardizes it, or Cluster API?
A: Right, like maybe Cluster API comes to depend on ClusterProperty and just sets this, yeah.
C: Which was a conversation that we had with SIG Cluster Lifecycle a long time ago, when we were first proposing this, but we haven't officially said, this is a dependency for you, or anything.
A: And then, on the other hand, even service CIDR, which I believe every Kubernetes platform has today, from talking with SIG Network folks, even that might not be something we can always count on going forward. I mean, there's nothing that actually requires a service CIDR in Kubernetes; that's why there's no property right now. There's nothing stopping you from just assigning VIPs at random from your own IPAM service, so yeah, it's tough.
A: You know, speaking from a GKE standpoint, I know some of these are certainly useful, but I don't know that we can make the statement that they should be standard across platforms. If I'm looking at a vSphere deployment, region means something very different than it would on AWS or GCP.
C: Yeah, okay. So I already had the sense that I should probably go talk to SIG Cluster Lifecycle, just to give them the update: hey, remember we talked a long time ago about ClusterProperty, we made this thing. Are there some types of things, either implementation-dependent or not, that you might want to see in here, and how should we work together, or not, to do that?
C: So I think there's that, and then the overarching thing I'm hearing is: keep it simple, especially our upstream implementation.
C: For now, at least, the ideas that we currently have either have too many exceptions or are not necessarily as clear-cut as we would like, compared to the kube-system namespace UID. And then, separately, there's this question of what becomes a well-known property and how to onboard them.
C
I
would
like
to
at
least
I
mean
I
think,
fundamentally
in
the
very
short
term,
we
can
say
like
go
through
us
and
then
we
can
kind
of
work
out
the
process
as
time
goes
on,
especially
how
much
we
need
to
think
about
that
varies
of
like
how
much
like
this
is
useful
to
other
folks
like
say,
cluster
life
cycle,
but
I
do
think
that
there
probably
will
want
to
be
some
sort
of
like
way
to
get
like
a
case
style
or
kubernetes.io.
C: Oh yes, go ahead.
B: Yeah, hi Laura. I think I kind of agree with what Jeremy was saying, but I do see value in having CIDR ranges as ClusterProperties, like service, pod, and node CIDR.
B: That might be useful for lifecycle as well, or just for pulling data from each cluster so that the multi-cluster setup doesn't conflict on CIDRs. Sometimes, for example, you're building clusters on-prem, and you definitely want to see that you're not using a particular CIDR in another cluster, so that might be useful there.
B: I think service, pod, and node CIDRs are generic enough that they can be standardized, but cluster, project, VPC, and region could be vendor-dependent. How you annotate a region and a project might be different between GCP, AWS, Azure, and so forth.
A: I agree. I've heard different feedback, though, from SIG Network, so it's probably worth talking.
C: All right, cool. So we can take an action to talk to SIG Network if we want to work on that at any point.
C: Okay, cool. Let's see, I think that is all I've got. I'll briefly slide back to this graduation criteria slide, just in case there are any general comments about these criteria, if we're missing anything, or if I'm totally misrepresenting the status. I think we just kind of resolved this one.
A: On that first point: I believe the constraint is coming from MCS; MCS wants it to be a label. I can see why someone might want it to be a subdomain, because often clusters are assigned subdomains. I'm not really sure.
C: Yeah, I think there's that, and we definitely went to SIG Cluster Lifecycle and asked this question, and I feel like they said label, but I don't remember exactly, so that's another point. But again, that's totally dependent on whether they even want to ever use this, I guess, because back then we were talking about lots of things being basically enforced consumers of this, and now it's a lot more opt-in.
C: So that's not as strong. So yeah, let me dig that up, get the details for you, and then we can just discuss in this group.
C: Yeah, okay, cool. That is my show. Did anything else get added to the agenda? I don't think so.
A: Awesome. I think that is it for this week, unless anybody has anything else they want to bring up.
A: All right, well, thank you all, and thanks a lot, Laura. It was great, exciting to see this moving. Thanks.