From YouTube: Kubernetes SIG Multicluster 20181009
B
Okay, that would be great. I suspect the item about the charter is basically just a matter of folks reading the pull request, which, admittedly, I updated on the wrong branch, and I only realized that I hadn't updated the correct one until earlier today. So if you took a look and were like, "why is this on the agenda when it hasn't been updated in a month?", that's why, and it is now updated.
A
Yeah, we had already started the meeting just discussing the two outstanding pull requests, both the SIG charter PR and the cluster connection API, which relates to the cluster registry. At least for the latter, myself and Jonathan, who's not here now, will commit to reviewing by the end of this week.
B
Okay, so I've dropped a link into the chat. Basically, this is a copy of the newer SIG charter template, to which I have added a very basic treatment. I tried to be really specific about the scope of the SIG, since there was feedback earlier on this pull request that said additional specificity was desired.
B
Essentially, what I did is define the scope in terms of the existing subprojects that are nominally attached to the SIG, and in doing so I tried to avoid vague statements about the wider problem space. I felt that was the best way to be really specific for now, and if we decide to alter the scope of the SIG in the future, we can update the charter as needed.
D
I see, sorry, I didn't notice that aspect of it. Yeah, if we're going to have v1 and v2 listed there, we definitely need to clarify which one is supported and which one is not. Or remove the versions, which is what I thought was already done, but it doesn't look like that's the current state of the document.
B
Kind of joking, Quinton, I'm not sure if you were on when I said this, but I had updated the pull request; I'd gone and made changes to address the comments, and then pinged people, tagged them in a GitHub comment saying "hey, take a look", but I pushed to the wrong branch. So I think there is still a need for, like, Aaron and Tim St. Clair to re-review.
A
It depends on what your exposure is in Kubernetes: some people have only heard about the Cluster API, and others, including people in this SIG, only know about the cluster registry API. So I understand the source of the confusion, but, like you said, I don't know how to directly address it without being overly specific.
B
What I'm thinking of in my head is something to the effect of: if another area of the community wants to use one of these components, in terms of, for example, writing to the cluster registry API, that should be within the scope of that part of the community; and in the event that another part of the community wants there to be changes in the cluster registry API, or another piece of software that's in the scope of the SIG, then jointly we should work through it.
D
Sounds like we might want, as a SIG, an FAQ page or something where we can refer people for questions that get asked regularly. This one seems to come up more than once: what's the difference between the two, and where do I go if I want to find out about the Cluster API versus the cluster registry? That seems like a reasonable candidate to include in an FAQ. The other one that comes up is: what's with the v1 and v2 stuff in Federation?
B
I haven't yet made any changes around conditions. I was hoping, before I invested too much additional effort in this, that we could get some group consensus about whether we think this is the right general API to have; and if we think so, then we can hone in on the details about what the format of status should be, for example.
B
In prior art in the Kubernetes community, a reference to a secret is not considered to be escalating. So, for example, the fact that secrets exist in a namespace does not inherently make the namespace itself escalating, and resources that contain references to secrets by name, and even to their keys (for example, anything with a pod spec can do that), aren't considered to be inherently escalating.
B
We also don't have a rich practice of cross-namespace APIs. So, without a use case that we can point to for storing the secret in another namespace (which honestly I think would just introduce even more security complications rather than diminish them), it seems conventional to me, as someone that's built a number of APIs in Kubernetes that have this kind of secret reference in them, that it should be okay. But that's my own subjective opinion.
B
Honestly, I think that one element that has some similarity to this, that I can point to elsewhere in the Kubernetes community, is not crossing namespaces but crossing from cluster scope to namespace scope. And the case I can point you to there is that we had a cluster-scoped set of API resources in Service Catalog.
B
There was an access review, performed as part of admission on this resource, that verified that the person creating the cluster-scoped resource, and making a reference to a secret in a namespace, had write permission on the secret resource in that namespace. That was to avoid a situation where somebody could steal the secret information by creating a cluster-scoped ClusterServiceBroker resource.
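The Service Catalog precedent B describes maps onto the standard SubjectAccessReview API. A minimal sketch of the check an admission plugin could issue is below; the user, namespace, and secret name are illustrative, not taken from Service Catalog's actual code:

```yaml
# Ask the authorizer: may this user write the referenced secret?
# An admission plugin denies the cluster-scoped create if the answer is no.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane                   # the person creating the ClusterServiceBroker
  resourceAttributes:
    verb: update               # write-level access, per the check described above
    group: ""
    resource: secrets
    namespace: broker-ns       # namespace of the referenced secret
    name: broker-auth-secret   # name of the referenced secret
```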
B
Yeah, I would say that's roughly accurate, with the qualification that, you know, Kubernetes APIs don't have a filtered list watch, so we can't constrain a collection of resources by the permission of the person trying to view them. If you have get permission, if you have list permission, you've got it; or rather, if you have list permission, you have read permission over the entire collection. So, modulo that one fairly minor nit, that's probably more representative of my storied history.
B
I just flashed back to my youth. No, to finish my thought: it's very possible to set up RBAC permissions, for example, so that a user can only see which clusters are in the cluster registry in a certain namespace. They may only be able to access the clusters and cluster credentials, but not access the secrets themselves. I think that's what you were asking about, Quinton.
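As a concrete sketch of that RBAC arrangement: the `clusterregistry.k8s.io` group comes from the cluster registry's v1alpha1 API, while the namespace and role name are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-viewer
  namespace: team-a
rules:
  # Users bound to this role can see which clusters are registered...
  - apiGroups: ["clusterregistry.k8s.io"]
    resources: ["clusters"]
    verbs: ["get", "list", "watch"]
  # ...but no rule grants access to core "secrets", so the credential
  # material referenced from those clusters stays unreadable to them.
```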
A
Yeah, I feel like a lot of what you're saying is implicit in the API. Part of this is not your fault, for the reasons you explained: the freeform, you know, kind of kubeconfig, and the fact that the kubeconfig format has not been elevated to an API outside of kubectl, to my knowledge, makes it a little...
A
But yeah, in terms of, like, if I was going to use this, I would clearly have a different actor or persona associated with the cluster reference itself and a different set for the secret reference. The former would be the admin, I guess, or the user, or the controller; and the secret reference would probably be just the controller and the admin, not necessarily the user of that cluster reference.
D
So that part I didn't understand, and maybe I misunderstand this, but the way I understand this could be used is: the cluster registry has a list of clusters, and the clusters are all readable by everyone, presumably or approximately; that would be a reasonable use. And then there are credentials, many of them, all of which refer to the same cluster, and each one of those cluster credentials is RBAC'd to the people who have access to it.
D
And if the secret that is referred to by the cluster credential is in the same namespace, with the same credentials, then you're good. So, in my namespace I have a cluster credential, which refers to a cluster in the registry and a secret in my namespace, and that's what I use, and I'm the only one who has access; or whoever has access to my namespace has access to those credentials. That would be the canonical use case I'm thinking of. So am I missing something, or are you worried about something different?
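The canonical use case D describes might look roughly like this. The `ClusterCredential` kind, its API group, and its field names are hypothetical sketches, not the actual shape from the pull request under review:

```yaml
# A secret holding the connection material, in my namespace.
apiVersion: v1
kind: Secret
metadata:
  name: prod-kubeconfig
  namespace: my-namespace
type: Opaque
stringData:
  kubeconfig: |
    # kubeconfig for the target cluster goes here
---
# A credential object in the same namespace, tying a registry
# cluster to that secret; namespace RBAC governs who can use it.
apiVersion: multicluster.example/v1alpha1   # hypothetical group/version
kind: ClusterCredential
metadata:
  name: prod-credential
  namespace: my-namespace
spec:
  clusterRef:
    name: prod              # Cluster object in the cluster registry
  secretRef:
    name: prod-kubeconfig   # same-namespace secret above
```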
B
Right, you would have many cluster credentials. So I think, Christian, the element of this that is the concern is that there are a lot of things that the kubeconfig could contain, and, as defined at this point, it's rather loosey-goosey what's actually expected by this API to be in that config file.
B
I'm not sure that Christian was saying that he felt that was a goal. I think it was actually the opposite: the way that I understood you, Christian, was that your reservations were that there are basically arbitrary things that could be contained in a kubeconfig file, and the API as defined is basically a reference to an opaque file that we don't have an API for.
A
Right, so let me try to be more specific and see if this applies. If I have a cluster, it's in a specific namespace, and I have two users, you can conceivably have a kubeconfig where one user has an access token and another user has their own access token. Now, are we going to have those two users use the same access token for the single cluster in the same namespace? Yeah.
B
And let me ask this by way of doing some discovery. Would you be more comfortable with this API if, instead of an opaque reference to a kubeconfig, there were strongly typed, or at least explicitly recorded, fine-grained expectations about smaller pieces of information than a whole kubeconfig file, that should be in specific keys of the secret, perhaps?
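For illustration, a "strongly typed" secret along the lines B is floating could pin down explicit keys instead of one opaque blob. The key names here are assumptions, not something the group agreed on:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-access
  namespace: team-a
type: Opaque
stringData:
  # Fine-grained, explicitly documented keys rather than a whole kubeconfig:
  server: https://prod.example.com:6443
  certificate-authority: |
    -----BEGIN CERTIFICATE-----
    ...CA bundle...
    -----END CERTIFICATE-----
  token: "<bearer token for a service account>"
```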
B
Yep, yeah, I mean, I had similar reservations, but this is the kind of feedback I was hoping to get from a group discussion. So it would be great if you could comment on that, and perhaps I can spend some time thinking about, like, what the minimum usable amount of information would be that could encapsulate all the different, you know, mechanisms that might be specified inside of a kubeconfig.
B
There was a feeling in the group that it seemed quite odd to do that when there is not a way to, for example, specify how you might actually access one of the clusters in the cluster registry. That is, at a very high level, qualitatively, what I would frame as the story for where this comes from, if that makes any sense. Yeah.
D
I can maybe add a bit of color there. The origin was actually a discussion, primarily between myself and Jonathan, in this meeting approximately a month ago. He wanted to promote the cluster registry to beta, but he made the observation that we didn't actually have any users yet of the cluster registry, and that made it very difficult to validate whether or not the API was sufficient.
D
So I made the observation that the Federation system used the cluster registry, but had to build essentially a parallel cluster registry to contain the credentials, because otherwise we couldn't use the cluster registry; and, once you do that, you don't really need the cluster registry, because you have to have another registry of credentials essentially, and that may explain why nobody was using the cluster registry. So, on the back of that, we decided to make the cluster registry usable without having to add additional stuff to it.
E
Okay, thanks. Just from looking at the pull request, what's missing for me is really the context of what set of problems this is trying to solve, since auth and credentials, as you mentioned, are kind of a large, blobby space with lots of different options. It's hard for me to evaluate whether this solves them, since I don't know exactly what set of problems it's trying to solve, or whether it does that well or not. Yeah.
D
That's very good feedback. So, if we did our job, we would have documented and noted what I just mentioned in the meeting notes from a few weeks ago, in which case we should cut and paste those in. So here it is: if you scroll down in the meeting notes to the 28th of the 8th, you will see the last item in the meeting notes about moving the cluster registry to beta, and the whole discussion.
B
Yeah, it's to produce a REST client that can be used to... well, it's to produce a REST client, and use that REST client to, one, hit the cluster's healthz endpoint, so that the status of this resource can reflect "yes, this cluster is actually reachable from where I'm running the controller"; and then, for other APIs, to use cluster credentials as the sort of lingua franca of how I can make a connection to a cluster that's off-cluster relative to me.
E
Okay, so, just from the outside perspective: since there are so many ways that one can authenticate with a cluster, using various mechanisms, knowing what subset of those are intended to be covered by this mechanism would be useful. If it's supposed to be comprehensive, meaning that no matter what form of auth you're using you could represent it using a cluster credential, then I think, in my opinion, it's a little vague and undefined, just like a secret.
E
But if it's supposed to cover more narrowly a subset of those, and we'll say that a different mechanism, for example an AWS authenticating webhook that has to be created or something like that, would use a different mechanism to represent that auth, then I think it would be good to clarify that in the scope of this.
D
Just to clarify my understanding: right now, the vast majority of interactions with Kubernetes clusters is done by kubeconfig, sorry, kubectl (that's the question), and kubectl uses a kubeconfig to get the information and then execute an authentication against those clusters in order to interact with them. Are those two statements approximately true?
D
All the detail of that is encapsulated in one tool, called kubectl, and one configuration for kubectl, which is kubeconfig. And I'm sure there are things that it cannot do; I'm sure you could make a cluster with a kubeconfig that an appropriate kubectl was not capable of authenticating against. But the vast majority of authentication is executed by kubectl using kubeconfig, and I think the scope of this PR is to cover those use cases.
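For reference, the common token-based case D is describing fits in a small kubeconfig like the following (names and server address are illustrative). The trouble under discussion is that the same format also admits client certificates, auth-provider stanzas, and exec-based plugins:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: prod
    cluster:
      server: https://prod.example.com:6443
      certificate-authority-data: "<base64-encoded CA bundle>"
users:
  - name: prod-user
    user:
      token: "<bearer token>"   # one of many possible auth stanzas
contexts:
  - name: prod
    context:
      cluster: prod
      user: prod-user
current-context: prod
```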
D
I would assume that the common client code that reads kubeconfig is used; if it is able to do such a thing, then I guess in a cluster it could do the same, and in a Federation control plane it could, in theory, do the same thing. We put that binary there, and all those details, and whether that requires different fields... but in theory, the kubeconfig that is contained in the secret could provide the same information that it provides to kubectl.
B
Rather than having first-class fields, for different pieces of what you can do in a kubeconfig, that were directly part of this resource; because that would make this resource escalating, meaning that if you had view access to this resource, you could, you know, reconstruct a kubeconfig with the same information and basically steal information that should be secret. It's not desirable to add new escalating resources, and very desirable to ensure that information that is secret lives in Kubernetes secrets.
B
So this may ultimately be a question of being very specific, and possibly adding validation on the controller side, about what can be in the kubeconfig that's referenced here. But I do not think that putting this information into first-class fields in this API is the right play to make, personally. Yeah.
A
I haven't heard the request for that, for, I think, all the right reasons. However, kubeconfig is too freeform, and I think you have to be much more prescriptive in what you actually support. So, if I cast this back to Federation v1, or what I remember from it, the actual store there was an access token for a Kubernetes service account. Now, that is something that can be easily supported, at least until Kubernetes introduces expiry times for access tokens, which becomes more complicated.
A
But if something like what I was mentioning before, about something in kubeconfig, requires client-side re-evaluation and a dependency on an external binary, that's a little bit problematic. Now, maybe this can be turned into a webhook, and, you know, it gets more and more complicated; but on the surface, I would try to narrow and be much more prescriptive in what this supports for now, rather than just relying on kubeconfig. Yeah.
A
At this point, I'm not sure I would even call it a kubeconfig. I would stick to one method that we know works everywhere, and that method is Kubernetes service accounts. It's no surprise that that's the one that Federation v1 used. Beyond that it just becomes fairly unwieldy, and I don't know how you essentially see...
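The service account method A recommends leans on a well-known secret type that Kubernetes populates automatically; a sketch, with illustrative names:

```yaml
# Kubernetes fills in data.token and data.ca.crt for this secret,
# yielding a long-lived bearer token for the named service account
# (the 2018-era behavior; later releases moved toward short-lived,
# expiring tokens, which is the complication A mentions).
apiVersion: v1
kind: Secret
metadata:
  name: federation-member
  namespace: team-a
  annotations:
    kubernetes.io/service-account.name: federation-member
type: kubernetes.io/service-account-token
```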
E
What seems a little weird to me is that, technically, a kubeconfig file can have a list of, like, n clusters and n credentials. So if you have a kubeconfig file in your cluster credential spec that has cluster endpoints that are different than the endpoints in, like, the actual cluster object you have, I'm not sure what the expected semantics of that would be.
E
And the comments I'm making, I'll put on the pull request. Okay.
D
I just had one last thought about this kubeconfig stuff to throw out there. So, at the end of the day, someone is going to put stuff in that secret, and right now we're saying it's a kubeconfig file; and then somebody is going to use that information to feed to some kind of piece of software to authenticate. And what we could do... so, in the simple case, you know, user X takes their local kubeconfig file that they know works, and they stick it in the cluster.
D
They know that, with the version of the tool that they used it with, it works; and now they can go to a different machine and, as long as they have access to the cluster registry, they can go and pull that kubeconfig, feed it to the same version of the tool, and it should do the same thing. Now consider a slightly different case.
D
An administrator might say: I generate all the kubeconfigs for all the users of my cluster, I put them in secrets in their namespaces, and I tell them to go and fetch them there and use, you know, tool XYZ to use those to access the clusters, and that doesn't force... So, putting aside the issue of multiple credentials in a single kubeconfig file for the moment, because I think that is problematic, but let's forget about that for now.
D
Assuming that there was only one of those clusters in any given kubeconfig file, we could put the onus on the person creating the secret, which contains the kubeconfig, to make sure that it is conformant with whatever they plan to use this credential for, and be less prescriptive. It's sort of the opposite approach to what Christian was saying, which is to bolt it down very tightly and say that it has to be this or that or the next thing. I'm just proposing it as an alternative way of looking at the problem.
D
I think you could... I can see both sides of the argument. Just to be clear, I'm not actually strongly opposed to your suggestion; I'm just thinking there is this other way of looking at it. I mean, there's a thing called a config map, and, you know, it's pretty loosey-goosey: whoever writes that config map determines what the keys are and what information needs to be in there for a given use case for that config map to be usable, and we could take a similar approach here.
E
If I were to build tooling against this, that uses this API, but I had almost no guarantees or constraints as to what the heck the data would look like, it's really hard to build tooling around that, or open-source tooling for that matter. And that's kind of one of the reasons I want there to be defined API objects, as opposed to, like, config maps that have a config embedded as a JSON blob that can contain whatever you want, where the onus is then entirely on the operator.
B
We're at time now, so I think we should call this conversation here and continue this, you know, at least on GitHub. It sounds like it might be useful to have, like, a brainstorming session about what the right way to handle these various concerns is. But I thank everybody for participating in the discussion today; this was very useful. Thanks.