From YouTube: Kubernetes SIG Multicluster 20180717
A: I did put only one item on today's agenda: this Thursday there's a community meeting, so we need to provide the SIG update, which will probably be an update about the three subprojects that we are running. Quinton, I guess you signed up, to my knowledge, to be giving that update in the community meeting. I can provide it otherwise.
A: So I can fill you in, although you probably already know what's going on in Federation, so I can take care of that — maybe today or tomorrow, sometime later — for the Federation update. But for kubemci and the cluster registry, I guess we'll have to see if somebody is representing either of those projects in today's meeting.
C: Yeah, quickly, for the cluster registry I think the main things, as Marie was saying, are the CRD-based work, and also making the cluster registry namespace-scoped, plus the introduction of a new kind of canonical namespace to store those — namespaced, but globally defined — clusters; that's likely kube-multicluster-public.
B: OK, yeah, I don't know that we need to go into that level of detail. This is just like a five-minute update, a couple of minutes of which is just telling everybody who we are and what we do, and then giving a very, very brief status on the projects. But yeah, I don't think we need to go into the level of detail of namespaces and so on.
A: Okay, moving ahead. I did not have any other item on today's agenda, except what we generally do: if we do not have any other items, one of the representatives from each project might provide the update for that particular project. I see that there is nobody from Google in today's meeting; they would generally provide the update about kubemci and the cluster registry, but I guess Ivan can also just give the update — I think he went through it already, but we can do it for the benefit of the folks who are in this meeting.
C: No, I think that's pretty much it for the cluster registry. The latest version includes the CRD, but there is no validation for some of those fields yet; that's something that we will probably look at addressing in the future. But aside from that, it's just the CRD-based work, along with the namespace-scoped cluster registry. So clusters are now namespace-scoped, and the canonical namespace is, I believe, kube-multicluster-public; that'll be kind of a way of storing global clusters in a namespace-scoped fashion. Those are the main updates.
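To illustrate the namespace-scoped registry being described, here is a minimal sketch of what a Cluster object living in the canonical kube-multicluster-public namespace might look like. The API group and field names are assumptions based on the cluster-registry v1alpha1 API, and the cluster name and address are hypothetical:

```python
# Hypothetical Cluster object in the canonical "global" namespace discussed
# above. Field names follow the cluster-registry v1alpha1 shape (assumption).
cluster = {
    "apiVersion": "clusterregistry.k8s.io/v1alpha1",
    "kind": "Cluster",
    "metadata": {
        "name": "us-east-prod",                    # hypothetical cluster name
        "namespace": "kube-multicluster-public",   # canonical namespace
    },
    "spec": {
        "kubernetesApiEndpoints": {
            "serverEndpoints": [
                {"serverAddress": "https://203.0.113.10:6443"}  # example only
            ]
        }
    },
}

def is_globally_defined(cluster_obj):
    """A cluster counts as globally defined when it is stored in the
    canonical namespace, even though the kind itself is namespace-scoped."""
    return cluster_obj["metadata"]["namespace"] == "kube-multicluster-public"
```

This models the point made in the meeting: the kind is namespace-scoped, and the "global" quality comes purely from the well-known namespace it is stored in.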
A: Okay, I believe the screen should be visible now. What we did, as the last exercise after cutting the alpha release, was a little bit of brainstorming on the next work items that we, as a work group, might be focusing on. You can probably see the list of those items, with a high-level priority — high or low — that we are trying to assign to each of them. Currently the two major stakeholders participating in this effort are Huawei and Red Hat.
A: There is an interest-per-feature also depicted for each of the features, so I'll quickly go through the list. One of the goals for now was achieving parity with Federation v1. As Quinton mentioned, there is some discrepancy: we do not yet have the high-level job scheduling and the equivalents of some pieces that actually exist in v1, even in the alpha release. We are actively working on that, and very soon we will have that also.
A: I did put a link to the notes in the chat also. So there is one item that we think is important for us: currently placement is a per-resource placement resource. For example, if there is a federated secret, we have a federated placement resource with a list of clusters specified in it. We think that that is too much work for a user, so there are a couple of items on that.
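The per-resource placement pattern just described can be sketched as follows. The kind and API group are assumptions modeled on early federation-v2 naming, and the cluster names are hypothetical:

```python
# Sketch of per-resource placement: every federated resource (here a secret)
# is paired with its own placement object that names target clusters
# explicitly — the per-user overhead the work item aims to reduce.
placement = {
    "apiVersion": "federation.k8s.io/v1alpha1",   # assumed group/version
    "kind": "FederatedSecretPlacement",           # assumed kind name
    "metadata": {"name": "my-secret", "namespace": "demo"},
    "spec": {"clusterNames": ["cluster-a", "cluster-b"]},
}

def target_clusters(placement_obj, registered):
    """Intersect the explicitly listed clusters with those actually
    registered, so a stale entry in the list is simply ignored."""
    wanted = set(placement_obj["spec"]["clusterNames"])
    return sorted(c for c in registered if c in wanted)
```

A user must maintain one such object per federated resource, which is the "too much work" being discussed.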
A: There is one item which is namespace-based placement: a placement resource can be applied to a namespace, and whichever resources exist within that namespace can be placed in the clusters which correspond to that placement for the namespace. The item currently highlighted on your screen is label-selector-based placement: rather than specifying the list of clusters on each resource, a user could specify a selector label on multiple resources, which can then map to a single placement resource.
A: So this is a little bit of abstraction provided to the user. There is one feature about colocation affinity or anti-affinity, which I'll skip for now; it might confuse things a little bit. We did talk about federated status; a lot of people have actually shown interest in it. For each k8s resource there is a federated resource, and there is demand for how exactly the status of the k8s resources should be depicted in the Federation, and how the status of a particular resource is shown to the user.
A: The next item is about auth, and auth also has different aspects. One is the first level: being authenticated to the Federation control plane, which — after the API server gets installed as a CRD-based control plane — becomes a no-brainer; it's the same as the k8s API server authentication. The second is: for whatever clusters are joining this Federation —
A: — what user impersonation should happen for those clusters. A user could have unlimited access to the cluster where the control plane is installed, but might want, or might need, a different level of access to the clusters which are part of this Federation. So those are the two aspects of that feature. Next as part of that is the rolling update of a federated workload.
A: It is similar to a rolling update of a deployment, where the controller needs to update the pods which are part of the deployment. Similar to that, we can envisage that a federated resource basically has all the k8s resources created in the individual federated clusters, so a rolling update of a federated resource would be updating those individual resources in each cluster in a similar fashion — say one by one, or in parallel; those kinds of things might be the strategies backing that update.
A: The next couple of items: we have federated ingress. Earlier, in Federation v1, federated ingress was a slightly GCE-specific implementation; it basically worked only in GCE clusters. We don't really have a very clear, generic solution for all cloud providers, but that's probably what we need to think about in order to implement this feature, and a lot of users have actually been showing interest in it.
A: There is a feature which is federated quota. At this time we are thinking only about having a simple propagation of whatever quota resources are created in the Federation, and this for now is doable just using configuration — like it was mentioned earlier, it is now possible to federate a particular resource, in a simple propagation kind of way, without writing any code.
A: This feature — read access to federated resources from the Federation — is about the user of the Federation probably needing to know about all the resources which are in the federated clusters. Which way to implement it is not very clear right now; there are different mechanisms for doing it, but this also, to some extent, coincides or overlaps with the federated status feature, where the user is interested in knowing the statuses of different resources in the federated clusters.
A: Okay, the next item is about federated storage. This is similar to the federated quota and the daemon sets, where we basically need to enable the APIs and have some tests around them. There are possibilities of more complicated setups where federated storage might be needed, but we are leaving that discussion for later. Okay — as part of the alpha release, we actually have a feature where a user can have DNS discovery of a federated service, and DNS discovery across clusters of the services which are created in the federated clusters.
A: Next — yeah, those are the main features. I think there is some duplication here: namespace-based placement, scheduling the same resources across the federation. These were mainly the list of what we actually discussed yesterday, and we might refine this. We might place these as the next items people will be working on, and maybe put some resources on them, or file some issues. I put Help Wanted tags also, so people who are interested might have some concrete items to take on. As of now —
A: The Federation work group meeting notes are right here. Yeah, that's it from my side. If there is anything else any of the folks here in the meeting might want to talk about, we can move on to that next; or if there are questions about what I just described, we can take those also — we have enough time. Thank you.