From YouTube: Kubernetes SIG Multicluster 2021 Jan 12
B: I have lots of faves, but right now I'm doing a lot of jasmine green tea, and overall I think I like green tea the most. I really like sencha, the really grassy green teas.
A: Jeremy Rickard is super into tea, if you know who that is. He does a lot.
A: Got a new look, but the same taste. Yup. All right! Why don't we get this party started? So I'm gonna try to remember the little intro spiel, and here we go: it's Tuesday, January 12, 2021, and this is Kubernetes SIG Multicluster. Laura, I think you're on the agenda first: the cluster ID KEP.
B: It means I gotta go click the button. Yeah, I should have tested earlier. Sorry.
A: That's okay! You know, I don't want to leave that enabled by default, but I always forget. You should be good now.
B: Okay.
B: Here we go. All right, welcome to my desktop. I want to give an update about cluster ID. This is in a similar format to last time, but with new questions. The slides are also linked in the meeting agenda. Last...
A: Jokes to land this time.
B: Thank you. Yes, last time there was an issue, so glad we could work that out.
B: Okay, so the update I want to give for today is similar to last time. I'm going to briefly reintroduce what I'm even talking about for people who haven't been around; there are three outstanding questions by my count, so I'm going to bring those up, along with what I think is going on there, and then hopefully we can make a call on provisional status.
B: We've been thinking about merging this KEP as provisional since the end of last year, and I think we're pretty close now, so I just want to touch base on that again. Basically, this whole project we're working on is to produce a KEP that describes a standard for how clusters should refer to each other when they're in the same multi-cluster environment: basically, what names they should call each other and where those names are stored.
B: In particular, we want to make it useful and strict enough to unblock some known use cases. Given the continuing implementations of multi-cluster APIs, there's some known stuff we can think of that we need a cluster ID for: being able to identify clusters in a multi-cluster setup in their logs; disambiguating pods in a multi-cluster headless service, maybe with cluster-aware DNS names, so that they're nice to write as humans; and tracking when new clusters have joined the environment, like "our teal cluster queries our orange cluster here." Okay, next slide.
B: So this is what I think is outstanding. We've been working on the KEP for a couple weeks, since last year, and this is what I think we need to talk about. The main outstanding things: should the id.k8s.io ClusterClaim, so the name itself, be strictly a valid DNS label or a subdomain? That's one we listed as needing to be discussed more broadly. Do we need to change anything about the goals and non-goals sections, especially since it sounds like we'll soon be advertising this more broadly, to more groups, and maybe we want to sort of set the tone more specifically, because that kind of came up in a recent comment? And then I'll just give an update about the naming brainstorm spreadsheet.
B: Okay, so this is the first one. I'm gonna just give what I think is going on and then ask for opinions and feelings. Basically, there's a question about this requirement for a cluster ID: that it must be a valid RFC 1123 DNS label. There's some conversation that people might want to use a subdomain here, and then there's another comment that maybe that would be a breaking change, so we should talk about it sooner.
B: I just want to confirm that what we're talking about here is the difference between someone having an identifier that's a single label, like "cluster-a", versus some compound one like this. And then, assuming that I have that right: what do we think about this potentially being breaking, and whether we have use cases for it?
A: So that is an accurate interpretation; excellent, that matches my own. I am happy, for the sake of reducing friction, to start with the label. For context, my comments are motivated by having added things like environment variables from Secret keys to the API. Secret keys were initially super restrictive (they were only valid DNS labels), and people felt that was too restrictive, so that experience was sort of coloring them.
E: Plus one. My main concern with moving it to a subdomain is knowing the contexts in which it will be used. So, for comparison, we made service names and namespaces be single DNS labels so that we could build knowably valid DNS names from them (well, mostly knowably), and so that we could use them in things like search paths without having to guess how many dots needed to be interpreted.
E: Now, it turns out that there are some use cases where people need more dots in their service names, and we can't really give that to them, because expanding on it would be a breaking change; it would potentially break clients who are making assumptions about it. So if we feel that this is important eventually, fine; I'm totally happy to go alpha with the simplest thing possible.
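For readers outside the meeting, the label-versus-subdomain distinction being debated can be sketched in a few lines of Python. This is an illustrative sketch of the RFC 1123 rules, not code from the KEP; the function names are invented for this example.

```python
import re

# RFC 1123: a DNS label is 1-63 characters of lowercase alphanumerics or '-',
# and must start and end with an alphanumeric character.
LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def is_dns_label(name: str) -> bool:
    # True only for a single valid RFC 1123 DNS label (no dots allowed).
    return bool(LABEL.match(name))

def is_dns_subdomain(name: str) -> bool:
    # A subdomain is one or more dot-separated labels, 253 characters max overall.
    return len(name) <= 253 and all(is_dns_label(p) for p in name.split("."))

# A single-label ID composes into DNS names with a known number of dots;
# a subdomain ID makes the dot count ambiguous for consumers.
print(is_dns_label("cluster-a"))         # True
print(is_dns_label("us-east.prod"))      # False: contains a dot
print(is_dns_subdomain("us-east.prod"))  # True
```

This is why the dot-counting concern matters: every valid label is also a valid subdomain, so relaxing the restriction later is easy, while tightening it later would break existing IDs.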
D: Right, I'd agree there that starting with the label for now seems to make the most sense. But I think this also kind of feeds into a conversation that's come up a whole bunch: maybe, if you do want a subdomain, that's a use case for aliases. And Laura, to one of your other points, maybe we want to add that to the list of non-goals right now, because I'm sure it will keep coming up.
D: You know, we're not trying to solve that; we recognize that there is probably a use case for aliases, and that's not necessarily something we need to solve right now.
B: Okay, cool, so that answers this question for me. The next point was: do we need to change the goals and non-goals section, especially since it sounds like we will soon be advertising this more broadly? So, on the actual PR, I felt the main criticism was to move the user stories up into the goals explicitly, which we can definitely do to make sure they're more upfront, and Paul suggested some prose for that which I might rip.
B: There's a non-goal that says the non-goal is to solve any problems without specific, tangible use cases, though we will leave room for extension. So this is kind of like a goal inside the non-goals, and I'm wondering if there's more we want to say about this, and how we think that's going to overlap when we take this to people who are outside SIG Multicluster.
D: Yeah, I like it. I would just mention aliasing as a non-goal, especially because in the first pass, you know, over a year ago, I had an attempt to introduce a cluster ID, and that was a huge portion of the conversation.
B: And then I can move the user stories up. Is there anything else that we feel we need to impress upon people outside of SIG Multicluster in this section, or in any other overview section? Or, if people have time to give it kind of another look from the perspective of someone from the outside reading this for the first time, then your thoughts would be appreciated.
B: Sounds good. And yeah, not too many new ideas on the naming brainstorm update; but it's only been a week (a long week, too). We do have the shiny one, "cluster value", so exciting, but in general it's kind of the same names. Maybe for next steps on this, I'm wondering the opinion: if we're going to merge this as provisional, then share it with more people.
B: Looking at it, I'm still learning all of the SIGs, so I could be making them up.
E: Yeah, I think Cluster API, SIG Cluster Lifecycle, is the best potential other consumer of this, and would be a strong voice if we can get them to endorse this proposal. So I do think going out to them earnestly and soon, and letting them have material impact on the design if they feel they need it, is useful, and that obviously includes naming.
E: Okay. You want to find out when their SIG call is, see if we can get on their agenda, and let's coordinate here on whoever from this group wants to appear over there. We don't want to overwhelm them, but let's show up with the appropriate voices and answer questions. Cool.
B: All right, so I will include them in the naming brainstorm as directly related to this. And then I just want to take the slot: I know that, I think, Tim and Paul both mentioned in recent GitHub comments that we should merge this as provisional soon. I tried to go and resolve all the comments that were, I think, more or less done.
B: Sorry if I was stepping on anybody's toes who wanted to resolve their own comment, but I just wanted to make it easier to read. And I guess I just want to know, when we want to do that, what I should do, besides resolving all the comments, to make it clear that it's time. And then, in particular, I'm wondering if we want to open the next-stage PR for when comments come from other SIGs, like SIG Cluster Lifecycle, or if we want to do that on this pull request.
D: I think we probably want to open another for that. It seems like the comments that have been coming in suggest that everybody, at least here, kind of thinks that this is generally a good idea. There are probably some things that we still need to sort out, but, you know, they've been smaller changes, like moving things around, not actual big conceptual changes.
D: You know, it's probably okay to merge provisional and maybe start another PR to gather feedback as we share it more broadly. Does that make sense?
B: Okay, cool. Then I'll go resolve all the last tiddly bits, including those related to the goals and non-goals section and the subdomain part (the labels versus subdomains cluster names part), and then I'll just maybe ping Paul for the next step.
A: Excellent. All right, Chi Jin, I think you're next on the agenda with the Work API.
F: Yeah, okay.
F: Okay, yeah. So it has been a long time since the last time we talked about this design doc, and I did some updates based on the comments, from different perspectives.
F: So the first thing that I did is I added a motivation part here, to discuss the current technologies for applying resources onto multiple clusters, what the common patterns are that we have today, and the motivation for this API. So I listed some of the existing technologies, like KubeFed v1 and v2 and GitOps, and tried to find the common abstractions across these technologies. For example, they all have a single source of truth; that could be a Git repo, cloud storage, or several RPC servers. And there will be a control loop that tries to fetch the resources from those sorts of tools and apply them onto one or multiple remote clusters. There should also be a placement piece: a way to decide which clusters these resources should be applied to. And also, in the various original blog posts, there are some criteria that I think are important, such as how to place resources into these clusters.
F: So we found some motivations for why we need this kind of API.
F: The first one is that we want to have a common control loop to apply a resource from a source of truth to a remote cluster, so that developers could easily use this kind of thing to integrate with any kind of source-of-truth tool and deploy the resources onto multiple remote clusters. It could also easily integrate with some placement primitive, to say which workload should be placed on which clusters, and it should be able to track the workloads that have been applied to the clusters.
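The common pattern being described (fetch from a source of truth, decide placement, apply to remote clusters, track what was applied) can be sketched as a generic loop. This is a hand-wavy illustration of the shape of that loop, not the actual Work API (which is a Kubernetes CRD plus controllers); every name below is invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

Manifest = dict  # stand-in for a parsed Kubernetes resource


@dataclass
class Work:
    """Records that a bundle of manifests was applied to one target cluster."""
    target_cluster: str
    manifests: list


def reconcile(
    fetch: Callable[[], list],               # source of truth: Git repo, RPC server, ...
    place: Callable[[Manifest], list],       # placement: manifest -> target cluster names
    apply: Callable[[str, Manifest], None],  # applier for one remote cluster
) -> list:
    """One pass of the common control loop: fetch, place, apply, track."""
    applied = []
    for manifest in fetch():
        for cluster in place(manifest):
            apply(cluster, manifest)
            applied.append(Work(cluster, [manifest]))
    return applied


# Toy run with in-memory stand-ins for the three pluggable pieces.
status = reconcile(
    fetch=lambda: [{"kind": "Deployment", "metadata": {"name": "web"}}],
    place=lambda m: ["cluster-a", "cluster-b"],
    apply=lambda c, m: None,
)
print([w.target_cluster for w in status])  # ['cluster-a', 'cluster-b']
```

The point of the abstraction is that `fetch`, `place`, and `apply` are independently pluggable, which is exactly the decoupling argued for later in the discussion.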
F: So I think that's the motivation part I have added into this doc. Some other things I have updated: I added a diagram, and a "working with higher primitives" section to discuss how this Work API could work with a higher-level primitive.
F: And another part is, I have some changes on... it's almost... sorry, it's here, yeah. So I also added a section about what resources should be considered in the Work. In the doc, I categorize the resources into three types. The first is the workload-related resources: that could be the Deployments, the StatefulSets, the ConfigMaps, or any namespace-scoped custom resources. The second type is the cluster-wide configuration resources, such as APIServices, CRDs, and storage classes. And there are also the credentials.
F: So I think the Work API should only concentrate on the workload-related resources, and for the other types of resources, like the secrets and the cluster-wide configurations, we should not use the Work API but should have other techniques to propagate them to the remote clusters, if we want to. So I think, yeah, this is what I've updated in this doc, if there are any comments or any questions.
E: I made all my comments on the doc. I'm tempted to pull up the drawing that we talked about before the holidays and just map all the terms; it was this proposal a few weeks back that inspired me to go do that drawing in the first place. So, my opinion... go ahead.
A: Sorry, I was gonna ask Tim: do you think maybe the PR that eventually is formed from this KEP should have that doc, or should have that diagram, in it? Because I think that would be a good place to put it.
E: Yeah, I still have on my to-do list to put that drawing in a place where other people can actually use it. At the end of the day, what I see here is, if you squint, a more or less generic implementation of that diagram. I don't remember what I called the diagram at this point, but you could roughly use this to carry anything and provide feedback, right?
E: So, in my eyes, it's basically GitOps with a Kubernetes API, which is not a bad thing; it's just a different design, right? And I know it's supposed to be about Work and workloads, but honestly, I don't really see a whole lot that's tying it to that. And again, that's not a negative thing; it's just an observation.
C: So, given that observation, does it become a question of what the benefit of this is over GitOps, other than just the re-implementation in the Kubernetes API? Is there a key benefit seen in doing it this way?
G: I think the advantage of this type of approach is that I can just declare here the payloads that I want to be distributed, and I can decouple a little bit where they're placed, when they're placed, and how they react to failure. And I could still apply GitOps even to that first phase, right? I could have a GitOps flow that applies these Work declarations into a cluster, which is then helping to distribute work across parts of the rest of the fleet.
C: Deploying the distributed deployment from KubeFed comes to the same thing, right? If there are these constructs that say, "look, here's a resource I want to deploy across multiple clusters," it's left to the implementation of the controller handling that how it's distributed across there, whether that is prescriptive or resource-based.
A: I think we're kind of crossing two streams here. I view the Work thing as the transport, and I view scheduling as something that happens above that. I do not think Work should have scheduling in it.
G: Agreed, great. I think the only thing I wanted to try to get across is that this is separate from GitOps, where GitOps typically has a pretty direct model of content from this repo going to this cluster.
E: Right. So that could be driven by Git, or that could be driven by the Kubernetes API, or by any other thing you can imagine, and you can still get the feedback. There are different trade-offs, right? Git has a nice history, but it's not so good with status; this is great with status, but not so good with history. So there are different use cases, and different users will have their own value propositions, which is a good thing.
E
This
is
what
ecosystem
is
the
the
million
flowers
the
and
then
I
agree
completely
with
the
idea
that
having
a
live
api
server
that
you
can
load
things
like
custom
resources
into
and
build
higher
level
instructions,
building
meta
scheduling.
On
top
of
this
potentially
interesting,
if
meta
scheduling
is
what
somebody
wants.
H: So, regarding multi-cluster placement, I recently had a nice experience with the admiralty.io project. Basically, they are doing the placement through virtual kubelet, and this is where dynamic scheduling works pretty nicely; everything is pretty much controlled with annotations. So here we go: we can utilize this GitOps style of things. So maybe you can look into the project to avoid some duplication, at least in the dynamic scheduling part. It worked pretty nicely for me.
E: So I had a question on that, with respect to the ultimate goals of this. Usually, at least the KEPs that I experience are proposals like "hey, we're making Kubernetes better by adding stuff to the core-ish of the system," or core and mantle, as Brian would have said. This feels more like crust or ecosystem than core. So am I misinterpreting that? Like, is the goal... let me back up.
A: I see what you're saying, Tim. It is not personally important to me to have a KEP just for the sake of doing so, since I think you make a good point about enhancing the core. I do think that it is a good idea to record the things that the KEP captures and to talk about graduation criteria, so I personally would be fine to put it somewhere else, but...
E: So my question is really: we're endorsing, effectively, an optional SIG-related project? Cool, that's great, as long as we're okay that we open that can of worms, right?
A: I don't know; maybe it's something that we should take a look at and make the decision on as a group.
E: Sure, yeah. And again, I want to just reiterate: this is in no way me trying to be negative, or a gatekeeper, or anything. I simply don't know what the done thing is in this area.
A: Okay, so Chi Jin, maybe you and I can take a look: browse through the open KEPs and see if we think this fits, or if it's maybe something we want to capture with a similar process but that doesn't have to go in enhancements.
A: Okie doke. Well, if there's nothing else, you can get the rest of your day back, and we'll see everybody next week.