From YouTube: Kubernetes SIG Multicluster 2020 Apr 07
D
So basically, we were working mainly with KubeFed, and we were trying to build on the work that was done in the past. We were targeting certain improvements, and we also encountered certain scalability issues and improvements that we could probably address by integrating them into the existing solution.
D
So yeah, I'll present a few quite short slides. Basically, the motivation was, as many of you requested, to use a pull-based reconciliation model. We also aimed to decentralize the computation. We saw that we have a big amount of resources and clusters, and there are some challenges there: response times and scalability suffered from it. So yeah, we tried to propose a solution, or approach, that is more decentralized, so that we can unload the control plane and improve scalability.
D
So basically, another of the items that we encountered was that, with the current model, the definition of what a federated resource's status means is not only whether it was created or not, but probably can be more. When you have a deployment, for example, you may also care whether the pods are crashing. So yeah, that will probably allow us to extend this and make it more customized for each resource.
D
Basically, another of the situations that we found is that you can probably create, like, a small KubeFed cluster, and then, if you don't know its capacity, you will not be able to match it to resources. So probably having additional information about the cluster capacity at any time will allow us to fail sooner.
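As a rough sketch of that "fail sooner" idea (hypothetical code, not from KubeFed; the field names are assumptions for illustration): if the daemons report cluster capacity, a placement can be rejected before anything is pushed to the cluster.

```python
# Hypothetical sketch: reject a placement early when reported cluster
# capacity cannot fit the requested resources, instead of discovering
# the failure only after objects have been created on the cluster.

from dataclasses import dataclass

@dataclass
class ClusterCapacity:
    name: str
    allocatable_cpu_m: int      # millicores the cluster can still schedule
    allocatable_memory_mi: int  # MiB the cluster can still schedule

def can_place(capacity: ClusterCapacity, cpu_m: int, memory_mi: int) -> bool:
    """Fail sooner: check the cached capacity before creating resources."""
    return (capacity.allocatable_cpu_m >= cpu_m
            and capacity.allocatable_memory_mi >= memory_mi)

small = ClusterCapacity("edge-1", allocatable_cpu_m=500, allocatable_memory_mi=512)
print(can_place(small, cpu_m=250, memory_mi=256))   # True: fits
print(can_place(small, cpu_m=2000, memory_mi=256))  # False: rejected up front
```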
D
So yeah, the basic approach is nothing really new from what we know in the ecosystem; we tried to take inspiration from what the kubelet approach means, what it does, and how it works. So yeah, the idea was basically to have a daemon that will be running on each federated cluster, and then the control plane talks to these daemons, one on each cluster. The main target of changes would be the control plane we have now.
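A minimal illustration of that pull model (an assumed sketch, not actual KubeFed or kubelet code): each daemon periodically pulls its desired state from the control plane and reconciles the local cluster against it.

```python
# Assumed sketch of a pull-based daemon: the control plane only stores
# desired state; the per-cluster daemon pulls it and reconciles locally,
# much like a kubelet pulls the pod specs for its node.

def reconcile_once(desired: dict, local: dict) -> list:
    """One reconciliation pass over the local state; returns actions taken."""
    actions = []
    for name, spec in desired.items():
        if local.get(name) != spec:       # missing or out of date locally
            local[name] = spec
            actions.append(("apply", name))
    for name in list(local):
        if name not in desired:           # no longer wanted upstream
            del local[name]
            actions.append(("delete", name))
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
local = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}
print(reconcile_once(desired, local))
# [('apply', 'web'), ('apply', 'db'), ('delete', 'old-job')]
print(local == desired)  # True: converged after one pass
```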
D
Well, if you have a look at how we define resources today, this is quite simple, and it makes sense to have a centralized and native way of defining resources on Kubernetes. So that's something that is great. On the other hand, one of the things is what the control plane is doing right now: when a federated resource is created, it creates a client talking to the target cluster, checking if all the resources are as expected, and otherwise creating them or doing certain actions, which does not scale so much.
D
In this case, the control plane will rely on the daemons to get the state of the federated resources, the cluster state, and the capacity. That will improve on the way we were doing it, where we required write access stored in the management cluster, or host cluster: you need to have the kubeconfig to reach the different clusters, and you also need write access to create resources there. This approach, on the other hand, only requires read access to the management cluster.
D
So the daemons will always know which federated resources need to be created, and that will reduce the amount of data we consume. And then the daemons as well will be in charge of watching the state of the resources that they are deploying in the KubeFed cluster, in order to expose this state and which resources are not being created. We also thought about another, similar approach, close to virtual kubelet, where you have certain handlers that respond to determined operations, which can be reconciling state or getting the cluster status.
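A toy version of that handler idea (hypothetical, loosely in the spirit of virtual kubelet, not its actual API): the daemon exposes a small set of named operations that the control plane can invoke, instead of the control plane watching every member cluster directly.

```python
# Hypothetical daemon handler table: the control plane calls a few
# well-known operations on the per-cluster daemon to learn its state.

class Daemon:
    def __init__(self, cluster_name, resources, capacity):
        self.cluster_name = cluster_name
        self.resources = resources      # local resource name -> status
        self.capacity = capacity        # e.g. free millicores
        self.handlers = {               # operations the control plane may call
            "status": self.get_status,
            "capacity": self.get_capacity,
        }

    def get_status(self):
        return {"cluster": self.cluster_name, "resources": self.resources}

    def get_capacity(self):
        return {"cluster": self.cluster_name, "free_cpu_m": self.capacity}

    def handle(self, operation):
        return self.handlers[operation]()

d = Daemon("member-1", {"web": "Ready"}, capacity=1500)
print(d.handle("status"))    # {'cluster': 'member-1', 'resources': {'web': 'Ready'}}
print(d.handle("capacity"))  # {'cluster': 'member-1', 'free_cpu_m': 1500}
```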
D
So then, the control plane just has to call these endpoints on the daemons, in order to be able to know the status at any time, rather than how it works right now. And here is a picture with a quick example of a federated resource, where you have the spec and the statuses. So, basically, you have three parts: the control plane as we have it now, and then the controllers, or daemons, running on the different clusters.
D
It will be a different picture than what we have right now, where each daemon is in charge of doing the reconciliation, doing the status checks, and controlling the state of its cluster. So again, we thought about proposing these changes in this meeting, and probably going into more detail later, if you find this interesting or find that it has potential.
D
So then, we also come to some of the scalability issues. We were also measuring the current implementation, and when you have five thousand resources federated, maybe you need to go and check all those statuses on each cluster. So yeah, that's also the current situation right now: it won't scale that much if the number of resources or the number of clusters increases.
E
So, and this is just me personally, this looks great. One thing I would like to point out, not to, like, sort of, you know, undermine what you've done (I mean, the design is great): Rancher actually just released, well, it really was Darren Shepherd who just released, a project called Fleet. If you go to github.com/rancher/fleet you can see it, and we actually follow a very similar model, right.
E
So we do pull reconciliation, and then each sort of agent, or daemon, that runs within the cluster that the resources are getting instantiated within pushes status back up to our control plane. So it's a very similar model to what this is, you know. And for us the pull reconciliation is absolutely essential, because we're trying to go after scale. I mean, we want to have a million clusters, you know, managed with a single control plane on our side. But, you know, it's just...
D
Yeah, I had a look, I had a look at the current situation, so yeah. I mean, I didn't go deeper into the code, but yeah, I know that you're using Kustomize, and that you have a bundle of deployment objects, like a generic Kustomize object, and that you have all that in place if you want to deploy on the cluster. I mean, you have an agent, so in this case, yeah, it's more or less like this daemon.
D
Well, it is living in, or running in, each cluster, so it is similar in this sense. But we really like the potential of the kubelet approach and its simplicity: it doesn't require having Helm in place, or require using Kustomize, and so on. So that gives us more information, or more potential, to quickly increase the amount of resources, or the types of resources, that can be federated.
A
So we have quite a full agenda today, so I think we probably want to keep moving. I think the next topic is: should we cut a release this week for KubeFed? In general, I'm a fan of releasing often, especially while things are at an alpha state. So if there are things to release, I would encourage you to go ahead and release them.
F
Sure, so I have something to bring up that isn't incredibly concrete. I wrote a blog post a little while ago, linked in the meeting doc, around an idea that I've kind of been sitting on: making the building blocks for people to build their own multi-cluster scheduling and so on, without needing to necessarily opt in to a single vertical stack. Because what I've seen, like, in the end-user community, time and time again, is that people want it just their own way, and that means that they reinvent almost everything themselves.
F
So, in talking to a lot of people in the community, like talking with the Kudo folks about how they got anyone to use it, the idea of coming up with, like, the most basic building blocks and letting people bikeshed the differentiating features on top of those has kind of been a recurring pattern. So the design that I kind of outlined was almost a meta-scheduler.
F
A scheduler approach where you have something comparable to the Kubernetes API itself: the cluster registry as it exists now; a workload or inventory registry, with bundles of more or less just raw objects; and then a scheduling process and a kubelet-like reconciler, going in either the push or the pull direction.
F
The cluster registry that we already have now is quite useful. I think internally we bikeshed our own thing that's the exact same. But the idea of, like, an inventory API is kind of missing: like an abstract "I have a bundle of stuff, I want to be able to have a standardized ID for it, pull it, and maybe write back some placement status." It's kind of an absent thing, and that makes sense as something to build towards.
B
F
So the simplest way to do the bundle of stuff is: it's an array of objects, whatever they may be. The idea being that, like, an app can be any combination of things, and you might even have something like a bundle of RBAC that isn't an app per se. There's no opinionation at this level about, like, a multi-cluster deployment or services; it's just some stuff. Okay.
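A minimal sketch of that unopinionated bundle (the shape and field names here are assumptions for illustration, not a proposed schema): a standardized ID, an array of arbitrary objects, and somewhere to write status back.

```python
# Assumed illustration of an inventory "bundle of stuff": just an ID,
# an array of raw Kubernetes-style objects, and a place for status.

bundle = {
    "id": "team-a/payments",            # standardized ID to pull it by
    "objects": [                        # any combination of things
        {"apiVersion": "apps/v1", "kind": "Deployment",
         "metadata": {"name": "api"}},
        {"apiVersion": "v1", "kind": "Service",
         "metadata": {"name": "api"}},
        {"apiVersion": "rbac.authorization.k8s.io/v1", "kind": "Role",
         "metadata": {"name": "api-reader"}},  # RBAC, not an "app" per se
    ],
    "status": {},                       # e.g. placement written back later
}

print([o["kind"] for o in bundle["objects"]])
# ['Deployment', 'Service', 'Role']
```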
F
The idea I didn't fully articulate was the moving of things; I was worried that was too premature. But the kind of placement idea was: you give it abstract scheduling constraints, which could be as simple as cluster labels (that's as far as we go with Lyft's infrastructure), but maybe you could get way more advanced with topology or, like, co-scheduling concerns, and then you can kind of get stuck in there. So you have your abstract criteria, and then something writes back what the concrete scheduling decision winds up being.
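A sketch of that placement flow under the simplest constraint mentioned, cluster labels (the data shapes are assumptions, not a proposed API): abstract criteria go in, and the scheduler writes back the concrete decision.

```python
# Assumed sketch of label-based placement: abstract constraints in,
# concrete scheduling decision written back onto the placement object.

def schedule(clusters: dict, required_labels: dict) -> list:
    """Return the clusters whose labels satisfy every required label."""
    return sorted(
        name for name, labels in clusters.items()
        if all(labels.get(k) == v for k, v in required_labels.items())
    )

clusters = {
    "us-east-1": {"region": "us-east", "tier": "prod"},
    "us-west-1": {"region": "us-west", "tier": "prod"},
    "dev-1": {"region": "us-east", "tier": "dev"},
}

placement = {"constraints": {"tier": "prod"}}       # abstract criteria
placement["decision"] = schedule(clusters, placement["constraints"])
print(placement["decision"])  # ['us-east-1', 'us-west-1']
```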
A
Is there value in thinking of, like, separating the scheduling element from, like, post-scheduling: the transport of those resources once they've been scheduled, and the writing back of the status part? Like, I think, I think actually it's quite valuable to, like, understand if we're on the same page about the mechanics once a scheduling assignment has been made. What do those look like? How do those work? Are there special cases for certain types of resources? Is the mechanism kubectl apply, etc., etc.?
F
Yeah, I pointed out that kubectl apply on Jenkins is kind of the industry standard for deployment. I think the benefit of explaining what the scheduler is: I see it as the most likely thing for people to want to write themselves, even if a generic version existed, because everyone is going to have probably some fairly special criteria.
F
That would take a very long time to get a universal config set right, whereas the idea of reconciling something has a lot fewer knobs that people would realistically touch, and that would be a lot easier to take as an off-the-shelf thing, especially if you can just say "I want the push one" or "I want the pull one." Yeah. So.
F
The important part: like, this broad architecture tends to exist in most multi-cluster setups right now, but the important part is making it kind of piece by piece, and making sure that we start with the easiest pieces that can encompass all these cases, and then kind of build onwards from there. Yeah, because everyone's kind of deciding factor right now is between what vertically opinionated stack they want to use versus "I had to add all this info to this, so I'm going to just write my own from the ground up," right.
C
I mean, I guess I can see independent value there, about how you got the resource there; it'd be useful. It lets you try to figure out how you actually create it. 'Cause I imagine, like, you can imagine the plans in your head already: people are gonna say, like, "well, I wanna be able to substitute this one value," and it would give you something to experiment with for how you can produce the actual thing that you want.
C
Like, you can imagine this related to, you know, you've mentioned lots of different stack products have it: there's one called, like, a SyncSet in OpenShift; we've seen other ones externally; someone on here just talked about something called Fleet, right. You can compare it against Hive, or some of the other offerings from different vendors.
A
Posing an open question: so, for example, Vallery, I think your write-up was really good, but it's probably less detailed than we would want for, like, an API definition that people could implement against. So, like, one next step could be writing up an API and, like, coming to a consensus, at least, that that API is, like, described fully enough that we could prototype different implementations.
B
So I know in the agenda, you know, Jeremy's going to talk a little bit about the multi-cluster services stuff in the second half, and I actually have to drop off at the half hour, so you're all on your own. But I think that was a really interesting approach to it: it was like, let's try to build an implementation to see what works, and then extract the basics from it, and what we ended up with is an API that is implementable in many ways. So I would love to see a similar approach here.
B
Like, hack and slash to prove that the basic idea holds water, and then see what you think the API and semantics of it might sort of look like, and see sort of how reductio ad absurdum you can make it, and then say: well, this is what you can assume. And then people can build different implementations on top of that assumption. And then, if the assumption's not sufficient, then we... so.
A
Cool. Well, it sounds like maybe we're talked through, then, for now, as far as next steps, and this SIG meets every other week currently, so hopefully that's a good amount of time to maybe come back and demo, like, some hackery, wherever that should be. Yep, fantastic. Cool, thanks everyone. Thank you, appreciate it. Jeremy, I think you've got the next two agenda items.
G
Last meeting, two weeks ago, we had talked about adding it to the community repo under SIG Multicluster, and so there's a PR up, and I think we were going to approve by lazy consensus after about a month, given all the craziness right now. So we're about two weeks in, and I was gonna see: two more weeks, or sooner? Like, is there a plan? What are we thinking?
C
I had a cartwheeling child; I just won't talk about it. I would like to have, like, at least a day to finish reading through what you got updated, but I think that we ended up in a spot where, like, it sounded like we all agreed on what it meant, and I just wanted a little clarification before merging. So I think I got added, and I will.
G
So that's that, then. The other thing: so I put up a draft, based on the discussion that's been happening in that multi-cluster services API doc for a while. It's also got some good comments, but I think, you know, that's a lot more involved and could use more input. But I wanted to see what the thought was for either recurring meetings, or actually forming a working group, around the multi-cluster services API. You know, maybe we could meet alternating weeks with this one.
A
I'm torn as well. I think if we met weekly, we'd have enough bandwidth to talk about it. I think every two weeks is tough, to kind of get enough momentum, and, you know, offline works, but a meeting is a good forcing function to actually, you know, spend time writing down thoughts, things like that.
G
So that's helpful. I would be completely happy to discuss it in this meeting, but also, if there are people who are less interested in that, and there's going to be a more focused group, that's where a break makes sense. But I don't know that it needs to be a working group: like, you know, any communication, as long as it's on the SIG Multicluster alias, and I imagine people will wanna track what's happening without just another meeting.