From YouTube: Kubernetes SIG Multicluster 2021 Nov 30
A: You scared me a little bit. I was looking the other way and I jumped.

B: Well, you better expect the unexpected, buddy.
E: Can you see my screen and hear me clearly? Yes? Yep. Thank you. I'm Mike, a developer from Red Hat. First of all, I want to thank the community for giving me an opportunity today to do a quick overview and demo of the Work API project. Before I get started, I want to mention that this project was created and mostly contributed to by Qiujian, who is also a developer from Red Hat, but unfortunately, due to time zone differences, he won't be able to make it to this meeting.
E: So the Work API is basically a common API to distribute workloads to multiple clusters. Some background information: first of all, this Work API project was mostly inspired by Vallery Lancey's multi-cluster workload blog posts and prototype, which basically describe grouping a set of resources and applying them to one or more Kubernetes clusters as "work", or a workload.
E: As of now we're keeping the API simple, and it's only responsible for describing the list of work that needs to be deployed to a single cluster. In terms of the motivation: we want this because it allows developers to easily integrate with other sources of truth, for example GitHub, or to use other kube-apiservers.
E: We made it simple so that it's easy to integrate with other placement primitives. The goal is to have a CR that basically tracks which cluster a particular workload is deployed to, and then tracks the deployed resources on that cluster, so that the control loop can do garbage collection on those resources.
E: Some terminology that I'll keep using over the course of this presentation: the work hub is the place where the Work API resides. It doesn't have to be a Kubernetes cluster; it can be an RPC server or a cloud API, depending on the implementation, but for the rest of the presentation and the demo it's assumed that the work hub is a cluster. Then there's the spoke cluster, sometimes called the managed cluster.
E: A quick overview; for more details, please visit the Git repo. The Work CRD represents a list of kube API resources to be deployed to a spoke cluster. The Work CR is created on the work hub and sits in a namespace that the work controller has authorized access to. Creation of the Work on the work hub means the resources defined in the Work will be applied to the spoke cluster.
E: An update of the Work will trigger resource updates, and deletion of the Work will delete the resources and garbage-collect them on the spoke (managed) cluster. Just a quick side note: deletion is not yet implemented in the current project's reference implementation. Part of the hope for this presentation and demo is that we gather some interest from the community and improve on the API specification, such as the delete policy, for example.
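The create/update/delete semantics described here boil down to diffing the manifests previously recorded as applied against the manifests in the current Work spec. A minimal sketch of that bookkeeping, assuming simplified "namespace/kind/name" string keys as stand-ins for real resource identities (this is illustrative, not the actual work-api controller code):

```go
package main

import "fmt"

// applyDiff compares the manifests previously applied for a Work with the
// manifests in its current spec, and reports which resources to apply and
// which to garbage-collect. Keys are simplified "namespace/kind/name" strings.
func applyDiff(previous, desired []string) (toApply, toDelete []string) {
	want := make(map[string]bool, len(desired))
	for _, key := range desired {
		want[key] = true
		toApply = append(toApply, key) // apply is idempotent: create or update
	}
	for _, key := range previous {
		if !want[key] {
			toDelete = append(toDelete, key) // no longer in the Work: delete it
		}
	}
	return toApply, toDelete
}

func main() {
	previous := []string{"default/Deployment/hello", "default/Service/hello"}
	desired := []string{"default/Deployment/hello"} // Service removed from the Work
	apply, del := applyDiff(previous, desired)
	fmt.Println(apply) // [default/Deployment/hello]
	fmt.Println(del)   // [default/Service/hello]
}
```

Deleting the whole Work is then just the special case where the desired set is empty, which is why tracking applied resources per Work is what makes garbage collection possible.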
E: Looking at the diagram, we can see there are multiple spoke clusters, so multiple work agent controllers will be running that monitor the Work API in the same or different namespaces on the work hub. It's totally possible for multiple work controllers to watch Work objects in just one namespace on the work hub and deploy the resources to multiple spoke clusters. In the diagram, as well as in the current reference implementation, judging by the direction of the arrows, it's using a pull model, where each spoke cluster has an agent running that watches the APIs defined on the work hub, fetches them, and applies them locally on the spoke cluster.
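The pull model described above is, in essence, an agent loop on the spoke that fetches the desired manifests from the hub and applies them locally. A rough sketch of that shape, with the hub and the local applier hidden behind invented interfaces; none of these names come from the actual work-api code:

```go
package main

import "fmt"

// Hub abstracts wherever Work objects live (a kube-apiserver namespace,
// an RPC server, a cloud API...). Illustrative interface, not the real API.
type Hub interface {
	ListManifests(namespace string) []string
}

// Applier applies a manifest locally on the spoke cluster.
type Applier interface {
	Apply(manifest string) error
}

// syncOnce is one iteration of the agent's pull loop: fetch the desired
// manifests from the hub and apply them locally. A real agent would run
// this on a watch or a timer and track applied resources for GC.
func syncOnce(hub Hub, applier Applier, namespace string) error {
	for _, m := range hub.ListManifests(namespace) {
		if err := applier.Apply(m); err != nil {
			return err
		}
	}
	return nil
}

// Fake implementations, just to exercise the loop shape.
type fakeHub struct{ manifests []string }

func (h fakeHub) ListManifests(string) []string { return h.manifests }

type fakeApplier struct{ applied []string }

func (a *fakeApplier) Apply(m string) error { a.applied = append(a.applied, m); return nil }

func main() {
	hub := fakeHub{manifests: []string{"deployment/hello", "service/hello"}}
	spoke := &fakeApplier{}
	_ = syncOnce(hub, spoke, "agent-ns")
	fmt.Println(spoke.applied) // [deployment/hello service/hello]
}
```

The point of the abstraction is that only the spoke initiates connections: the hub never needs network access to the spoke, which is the attack-surface advantage discussed next.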
E
A
push
model
can
also
be
implemented.
A
push
means
that
the
controller
on
the
hub
watches
the
api
define
the
workflow
and
pushes
the
resources
to
the
managed
cluster.
There
was
an
active
community
discussion
over
push
versus
pool.
E: As of now the Work API doesn't restrict you to either of the models. It seems like the general consensus is that the pull model has some advantages, such as a smaller external attack surface, because the spoke clusters don't need to be exposed; they just need to communicate with one hub cluster. But the implementation of the pull model is maybe more complicated than push, so there are pluses and minuses. Again, though, the Work API spec doesn't restrict users to either push or pull.
E: So, looking at the current spec: not everything is displayed on the screen, so feel free to visit the GitHub repo for more details. The Work spec basically just defines and describes the list of workloads to be deployed and applied to the spoke cluster. Again, there has been some detailed discussion around whether we should use the runtime RawExtension as a way to wrap all the workloads, because that really opens up the API; that's another topic that we hope the community can provide feedback on.
E: The current status is pretty straightforward: it describes the progress of the Work being applied, and for each individual workload it can describe the resource status on the spoke cluster. Regarding the statuses shown from the spoke clusters, there are scalability issues, so there has been some discussion of how we can limit the amount of data to show in the Work status as well. That's another area where we're interested in gathering community feedback.
E: Here's an example of the spec. We can see that in this spec there's a workload with a Deployment, and then we define the whole Deployment spec here.
E: So I'm going to do a quick demo right now with the current reference implementation of the Work API.
E: The reason why I keep saying "reference implementation" is that there are other consumers that use the Work API, enhance it, and have been running it in production. Some examples are Open Cluster Management, as well as another project called Karmada; both have their own implementation of this Work API with some enhancements, and both communities are looking to contribute back to this.
E: This project is really about coming to a consensus and defining the API spec based on community feedback. But getting back to the current Work API reference implementation: on my left side I have a newly created kind cluster that's set up to act as the hub; on the right side I have another newly created kind cluster that's set up to act as the spoke cluster. To save some time, I already have the work agent deployed on the spoke cluster, and I have the Work CRD applied on the work hub.
E: So I'm going to apply this Work example on the hub, and hopefully everything works out and it will be delivered to the spoke cluster. In this workload there are two resource types: a Deployment and a Service, defined in the manifests.
E: We can see that it's already applied successfully, and the deployment of both resources seems to be fine. If we check on the spoke cluster, the Deployment is there now and the Service is there. So that was a really quick demo of the reference implementation of the Work API. Once again, we're really looking for more feedback from the community, so please provide feedback in the community discussions. With that being said, sorry that I went a little bit fast.
A: Did my audio go out? Can folks hear me?
A: Okay, so one thing that I wonder about: when we talk about being judicious about status updates of deployed resources, I think there are a couple of axes. One is how much of the status you put there: do you pick key conditions, like Ready and others, do you put everything there, or do you try to do something smart? And then another dimension would be how frequently you do that, where the chattiness and frequency of communication probably has a pretty outsized effect on overall network traffic back to the work hub. I wondered if folks were thinking about those different axes yet, and sort of where the conversation has landed so far.
E: So there was a proposal from Hongcai, who is one of the implementers of the Work API in the Karmada project.
E: He seems to want to collect most of the resource specs, but based on our discussion it seems it might be a little bit too heavy to collect all the resource specs. Coming from the other community, Open Cluster Management, Qiujian has suggested...
E: ...specifying what status you're looking for through either templating or some JSON arguments, so that it limits the data to just what you really want on the work hub. As for the frequency, that is another topic for discussion; it should probably be a periodic thing instead of constant updates, but we haven't gotten into that yet, because we haven't fleshed out the status to display yet.
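The field-selection idea mentioned here, returning only the status fields a user asks for, could be sketched as extracting named paths from the full status object before sending anything back to the hub. A toy illustration; the path representation and helper are invented for this sketch and are not part of any actual proposal:

```go
package main

import "fmt"

// extract walks a decoded JSON object along a path like
// ["status", "readyReplicas"] and returns the value found, if any.
func extract(obj map[string]interface{}, path []string) (interface{}, bool) {
	var cur interface{} = obj
	for _, key := range path {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil, false
		}
		cur, ok = m[key]
		if !ok {
			return nil, false
		}
	}
	return cur, true
}

func main() {
	// Full status as observed on the spoke cluster.
	full := map[string]interface{}{
		"status": map[string]interface{}{
			"replicas":      float64(3),
			"readyReplicas": float64(3),
			"conditions":    []interface{}{"..."}, // large, not selected
		},
	}
	// Only the fields the user asked for make it back to the hub.
	selected := [][]string{{"status", "readyReplicas"}}
	for _, p := range selected {
		if v, ok := extract(full, p); ok {
			fmt.Println(p, "=", v)
		}
	}
}
```

Filtering on the spoke before reporting addresses the "how much" axis; batching those reports on a timer rather than on every change would address the "how often" axis raised above.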
A: Are you all having conversations about this that other folks can join? Because, since it is a subproject of the SIG, if I remember correctly, it would be best to have those conversations in public and make sure that people are able to join as they have interest. The more able people are to join, the more likely you'll get additional contributions.
E: Right. We're drafting a proposal, an enhancement document, right now to address the status concern and the delete policy. Once those proposals are updated, we'll post in the channel, and maybe come to other community meetings where we can openly share the proposal, similar to this document, and then we can discuss how to move forward with that.
F: Thanks. Actually, yeah, that second part was my question too. I think this is going to be a really interesting conversation, because with status, obviously you want as much status as you can get, and the faster it updates, the more interesting the tooling it seems like you can build. But obviously, if you have a bunch of clusters that are feeding status and it's super chatty, you can put a lot of load on an API. So yeah, that's going to be a really interesting conversation.
E: I agree, and we're working on the draft proposal, which hopefully I'll be uploading.
F: Great, yeah. When you have that, I guess sharing it with the list would be a great first step, and then another community meetup like this would probably be a really good place to dig in.
A: Okay, so it sounded like Karmada has started adopting it. How about Open Cluster Management? I remember that there was an earlier form of a Work-API-like thing in the RHACM product. Is the Work API being used in RHACM?
E: Yeah, yes. Josh, is it Josh? Josh, go ahead.
E: Awesome, awesome. The API that both Open Cluster Management and Karmada use is very similar to the Work API, but there are some enhancements, such as garbage collection and some status updates as well. We're trying to basically come to a consensus agreement, and then we want to...
A: Thank you very much for giving a readout on this. Looking forward to next steps.
A: I want to say that that is... I can't say.
C: And I want to share these GIFs because I love them, so let me share my screen. Basically, I just wanted to give some updates on some stuff that's been going on, like some PRs that have been merged, and so on. In the background, sorry, my dog is nearby, so hopefully this is not too distracting. These guys... let me... oh, this is the wrong one.
C: I still can't do it. Sorry. Okay, we'll just leave them dancing there for now. Okay, so I just wanted to give some updates, especially on KEPs that I've been working on, mainly on the ClusterID KEP. There was a PR merged to finally change the name ClusterClaim to ClusterProperty, which was something we voted on a while ago. So you will now sound old-school if you use the term "cluster claim"; make sure that you update your lingo there.
C: Also, there was a production readiness review needed for the ClusterID KEP. That was more of a PRR-lite, because this is out of tree, but there's some automation that requires it for a KEP to move to "implementable" now, so there's a PR merged about that.
C: And then, long ago, we talked about upgrading some graduation criteria for the MCS API, for example taking out the explicit requirement for a kube-proxy implementation in favor of directly interpreting ServiceImports, and tying the ClusterID API graduation more closely to the graduation scale of the MCS API KEP. So anyway, that stuff is also officially in there too.
C: So progress continues on some of those logistical things; for example, the next step for ClusterID is to submit it for an API reviewer specifically. But all of these gophers here are because I wanted to boost Jeremy's work. I didn't see him on the call when I last looked at the participant list, but I just want to boost it: he worked a lot on a multi-cluster DNS plugin for CoreDNS, and this is, or was, a beta graduation blocker for the MCS API.
C: So I really appreciate you working on that, and hopefully people can go take a look at it if you want to see it. You can enjoy these gophers for now; I'll probably delete them after this, because it's probably not great for everybody's download speeds for me to have a billion GIFs in here. But I just wanted to boost that work, so thanks to Jeremy, in absentia, if you are not here for that. So yeah, those were my updates, and my GIF update.
F: Awesome, that's super exciting. Yeah, really excited about that DNS plugin. Also love the puppy story.
C: I know, I'm so professional, but actually my dog is just honking back here. Sorry, it sounded like a troll.
F: Yeah, very much excited to finally get MCS graduated to beta, and then, it seems, I'm going to naively say, a short trip to GA after that.
A: All right, well, excellent session today. I think that's the end of our agenda, so maybe I'll just close by saying: we really want to hear the ideas that people have, the problems you're struggling with, and, to the extent that you can and are willing to share, the things that you're independently developing. Please don't hesitate to throw something on the agenda if there's something you want to talk about or share with us. And everybody have a great rest of your day.