From YouTube: Kubernetes SIG Multicluster 2020 July 7
Okay, can people see that? Yep, cool. Okay. So these are our results: we had 57 responses to the survey. The survey was only shared out on the SIG-MC list, and I actually find the aggregate scores to be a little bit more useful, which is right here.
So supercluster definitely had our highest aggregate score, mostly because it had the fewest people who hated it.
Yep. So, because, you know, if we go down to... like I said, for the top ones we have supercluster, clustergroup, clusterset and multicluster, and then we asked a sort of second thing. By the way, we did provide people the opportunity to cite names that they thought should be blocked because they had problems; most of the people, going by the responses, didn't really [inaudible].
So it's actually interesting, because for top preference, you know, we asked everybody which was their top choice, and we're tied. Oh, actually, now we've gotten a few more responses, and now we're tied between supercluster and clustergroup. Before, clustergroup actually was number one, but it was not number one in the aggregate score, because a bunch of people hated it; so now we're actually tied. So that would actually be an argument, if we are done with the survey.
So I wanted to just take a step back here really quick and say that I don't think we should think of this as something that we have to commit to right away. I look at our determination of a name as: let's find maybe a next, temporary name that comes from community input. And I think what solidifies the choice of the name and makes it permanent is when we have an API that references the name that moves to beta. Do people agree with that? Sure; alphas we can...
In fact, I just noticed this: if you dig a something.local address, it says you are experimenting with what happens when you put multicast DNS on the wire. So, like, we're clearly doing the wrong thing. I would like to have the discussion with SIG Network about, like, let's pick a suffix that we can all agree on as the next one, whether that's x or something else, and to do it like this. It's a separate conversation.
One reason that I want to make sure we think of this as a little bit open-ended (and we'll talk more about this later on in this meeting; we have another agenda item for it) is that we've basically deferred the problem of cluster-registration-type concerns, and there's a touch point between whatever name we have and a future registration-type concept.
I think, you know, just in the interest of fairness, like last time, we should go through them real quick. Tim, to answer your question: were there any good ones? There were some okay ones. You know, let's quickly decide as a group. Maybe, I guess, that was my nicer way of saying I don't really think there are any strong contenders, but let's see if anyone disagrees. Okay, so starting here: I think commingle was the last one we got to. Yeah, so contingent.
We want to keep... where do we start? Contingent.
An option would be that we buy a TLD: we all pool our money ourselves, .multicluster, and we don't need a domain, we just buy a TLD. That's not a bad idea. Yeah... it only costs like 400 grand to buy a TLD, and then you have to have [inaudible]... never mind, we're not doing this.
So, the next item on the agenda I put on there is a follow-up to the discussion on next steps for cluster registry, and I know that Klaus and RainbowMango had some thoughts they wanted to share about this. Our previous kind of feeling, what we had talked about a few SIG meetings ago, was that we didn't really want to be bound by the current implementation or current API in cluster registry, but we felt the concept was useful and one that we would be getting to.
We are building a cluster manager service which will rely on a cluster registry API, and so we want to know whether the cluster registry will be abandoned. In addition, we'd like to enhance the API, such as adding cluster capacity, like CPU and memory, to the API.
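For concreteness, here is a minimal sketch of what a registry entry extended with capacity could look like. Every type and field name below is an illustrative assumption; none of this is the existing cluster registry API.

```go
package main

import "fmt"

// ResourceList maps resource names (e.g. "cpu", "memory") to quantity
// strings, echoing the shape Kubernetes uses for node capacity.
type ResourceList map[string]string

// ClusterStatus is a hypothetical status block that an in-cluster agent
// could populate with the member cluster's aggregate capacity.
type ClusterStatus struct {
	Capacity ResourceList `json:"capacity,omitempty"`
}

// Cluster is a hypothetical registry entry for one member cluster.
type Cluster struct {
	Name   string        `json:"name"`
	Status ClusterStatus `json:"status,omitempty"`
}

func main() {
	c := Cluster{
		Name: "member-1",
		Status: ClusterStatus{
			Capacity: ResourceList{"cpu": "64", "memory": "256Gi"},
		},
	}
	fmt.Printf("%s: cpu=%s memory=%s\n",
		c.Name, c.Status.Capacity["cpu"], c.Status.Capacity["memory"])
}
```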
My own personal opinion is that I think the registration concept is very valuable. I'm not at all, like, down on the concept of a cluster registry, but I do not see a reason that we should be bound by anything in the current one, because there is so very little there, and there is not a great articulation of what it means to be in that registry and how you might... for example, RainbowMango, you talked about extending the API with capacity. I don't think that there is, like... currently, there is no concept of any kind of software that runs in the cluster that you register that would be able to write that capacity, for example. So I think that's a valid use case, but, just to be really clear, for me the question is: is that something that we must base on the existing cluster registry project, or can...?
Here's another thing to just throw out there; a different concern is that there's a desire, I think, to shrink, if possible, what's in the kubernetes org. If we were starting completely fresh and the cluster registry repository didn't exist, I feel like we would start it in kubernetes-sigs, and that's another kind of reason in my own thought process.
I am not aware of any current piece of software that actually uses it, and I think one of the reasons for that is that you can't actually do anything with the cluster registry currently. There are no controllers that back it; it's just an API, so there's not really anything that you can use it to do, which is another reason I'd rather sort of start with a clean piece of paper and talk about what we would want it to do.
I mean, from my perspective it seems like... also, Klaus, Rainbow, we've got a couple of use cases for an API. With multicluster services I think we have another one: controllers are going to need some kind of registry, and in the API spec today we've kind of hand-waved it, right? We've said, you know, an implementation can provide one, but I think a standard, common API that various controllers and tools can count on makes sense.
Maybe we should, you know... if there's nothing in particular about the current cluster registry that is important, and it's really about having something common and community-maintained, maybe we can all start, you know, a doc to kind of brainstorm the various requirements and things we want to get out of it.
One thing that I learned a lot about in my Kube experience is not to tell yourself that you have to make a decision when you don't have to. So I don't think that in this meeting the fate of cluster registry as it exists today hangs in the balance. I'd like to... so, one of the things that we should do (and I accept responsibility for not having done this before) is, I think, solicit opinions on the mailing list, so that folks who are not present in the meeting today can share their thoughts about it. But I think that we can also begin to collect use cases for what we think a Kubernetes-community-level cluster registry concept should be able to facilitate or make possible. That way, you know, we're not gonna go and, like, delete anything today, but I don't think that we should delete something without more than just a vague, verbalized notion in a SIG meeting that we're going to replace it eventually with something. So, either way, I think maybe a sensible way to proceed would be to start collecting use cases and see if there are any other opinions, by mailing the list. Does that sound like maybe a sensible way to proceed?
So, yeah, basically, people, we'd love feedback about the two KEPs that we opened. One is relating to status updates on federated resources. Our main intention is to kind of increase, or improve, the types of conditions that we have for all the types of status values that we have. One issue with it is that, you know, we only have propagation: we have propagation timeout, its creation time and stuff. So then the main intention is to follow another approach that has been [described] there in the KEP, where you can extend the conditions and define types like Ready. That will be very useful when you are having, for example, federated resources such as Deployments, right? You really want to know: is this ready or not? Not simply whether it has been created or failed at [some stage].
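As a sketch of the condition extension being described, suppose a Ready condition type sits alongside the existing propagation status. The names here are illustrative assumptions, not the KEP's exact API:

```go
package main

import "fmt"

// ConditionType names one aspect of a federated resource's status.
type ConditionType string

const (
	// ConditionPropagation reports whether the resource was delivered
	// to member clusters.
	ConditionPropagation ConditionType = "Propagation"
	// ConditionReady reports whether the resource is actually ready in
	// member clusters, not merely created there (e.g. a Deployment with
	// available replicas).
	ConditionReady ConditionType = "Ready"
)

// Condition mirrors the usual Kubernetes condition shape.
type Condition struct {
	Type   ConditionType
	Status string // "True", "False", or "Unknown"
	Reason string
}

// isReady scans the conditions for a true Ready condition.
func isReady(conds []Condition) bool {
	for _, c := range conds {
		if c.Type == ConditionReady && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	conds := []Condition{
		{Type: ConditionPropagation, Status: "True", Reason: "Propagated"},
		{Type: ConditionReady, Status: "False", Reason: "ReplicasUnavailable"},
	}
	// Created and propagated, yet not ready: exactly the distinction
	// drawn in the discussion above.
	fmt.Println("ready:", isReady(conds))
}
```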
That was one that I wanted to share, but please go to the KEP and we can keep the discussion there. But, I'd say, the most relevant one, the one that has been awaited for long, was about the full reconciliation KEP, and I know that he has some thoughts [about it]. But again, please jump in and, you know, start to ask questions and raise doubts. And, yeah, initially the intention is... well, we're considering probably keeping a feature gate where you can choose, probably, between [modes of] reconciliation.
It was not intended as such in the KEP, but, yeah, we had that in mind before; so probably we don't really want to [discard] everything that has been done until now. So, having a feature gate where you can have, or choose, the pull reconciliation model: that will probably be a really good approach, and [we can] move over, from today's model, to pull reconciliation.
...of watching, or taking a look at, the control plane to know which are the new resources, and then applying the reconciliation in the clusters. That is, more or less, the functionality; that's the difference between this pull model and the other approach we shared in the KEP. However, yeah, maybe we should probably follow the tentative approach, which was to follow exactly the same agent approach, a model like the [inaudible].
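To make the pull model concrete, here is a toy sketch in which an agent inside the member cluster reads the desired resources from the control plane and reconciles local state, instead of the control plane pushing changes in. All the types and functions are simplified stand-ins of my own, not KubeFed code:

```go
package main

import "fmt"

// Resource is a stand-in for a federated resource template.
type Resource struct {
	Name string
	Spec string
}

// desiredFromControlPlane returns the desired state; in a real agent this
// would be a watch or periodic list against the control plane's API.
func desiredFromControlPlane() []Resource {
	return []Resource{
		{Name: "web", Spec: "replicas=3"},
		{Name: "db", Spec: "replicas=1"},
	}
}

// reconcile pulls the desired state and applies any differences to the
// member cluster's local state, returning how many changes were made.
func reconcile(local map[string]string) int {
	changes := 0
	for _, r := range desiredFromControlPlane() {
		if local[r.Name] != r.Spec {
			local[r.Name] = r.Spec // apply the change locally
			changes++
		}
	}
	return changes
}

func main() {
	local := map[string]string{"web": "replicas=1"} // drifted local state
	fmt.Println("changes applied:", reconcile(local))
	fmt.Println("web is now:", local["web"])
}
```

The point of the sketch is only the direction of traffic: the member cluster asks the control plane what it should run, which is the inverse of today's push flow.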
Well, [it's] usually the same as the current model, where we are currently using client-go to do the remote requests, to know whether the cluster API is alright or otherwise, and to fetch the resources by using client-go instead of having the [inaudible]. So the idea of endpoints, again, was mainly to be able to support additional functionalities that we would do within the cluster, rather than replacing the current standards and cluster health checks, how they work in the push reconciliation. So, yeah.
As I kind of referenced before, I think full reconciliation use cases in KubeFed are probably rife with use cases that may be more universal than that and may also apply to cluster registry. So, once we've got, like, a doc started, Hector, I'd love for you to kind of track that and see if there's anything that really rings a bell, that, like, resonates with commonality between those cluster registry use cases and ones that fold up into the KEPs that you mentioned. When I took a look at your full reconciliation KEP...
If you have a controller that's continually updating the status of the federated resources in the Federation API, that might wind up being a lot of writes. Have you thought about what kind of QoS properties you would say such an API had? Like, maybe it gets updated, but it only gets updated, like, every five seconds or something like that? Have you thought about that at all? Yeah.
So, I don't know if that was intentional, but each of the conditions [is] updated only when you get cluster resource statuses that are not the same; for instance, a propagation timeout or something like that. We just have the status of the resources, where the propagation state isn't really, kind of, written down [continuously]; rather, we know whether the resource's status is ready or not ready, right? So, yes, I have the same focus, and the intention is to simply take those that are not satisfying the condition, instead of having to write all the time. Yeah.
Okay. I think it's something to think really carefully about, because, like, the QPS needed to do those writes is gonna vary linearly in the number of resources that you use Federation to deploy. So I think you want to avoid, like, a thundering herd type of effect, and so it's worth thinking very carefully.
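A back-of-the-envelope sketch of that scaling concern, under the assumption (mine, for illustration) that a controller writes status for every federated resource once per update interval:

```go
package main

import "fmt"

// statusWriteQPS models the write rate if every resource's status is
// written once per interval: writes per second grows linearly with the
// number of federated resources.
func statusWriteQPS(resources int, intervalSeconds float64) float64 {
	return float64(resources) / intervalSeconds
}

func main() {
	// Illustrative numbers only: a 5-second interval, as floated above.
	for _, n := range []int{100, 1000, 10000} {
		fmt.Printf("%5d resources -> %6.0f writes/sec\n", n, statusWriteQPS(n, 5))
	}
}
```

Even with a modest interval, the write rate tracks the resource count one-for-one, which is why writing status only when a condition actually changes matters at scale.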