From YouTube: Kubernetes SIG Multicluster 20171024
C
I think Eric is here today, so if there are any questions or concerns about the cluster registry that people had, we can discuss them. Then I think we can discuss the cluster registry project plan, which was put out last week or a little before that, I don't quite remember, and make sure that if there are any concerns or major issues with it, we can at least know about them in this meeting. If not, then we can move forward with that plan more concretely.
D
Yes, I can chat a little bit about some of the cluster auth, or the cluster registry auth, use cases that I've at least been thinking about and discussing. For the people missing it, I did drop a link in the agenda notes describing a little bit of a proposal that we might explore. Unfortunately, Zoom doesn't support Wayland, so I can't share my screen. I would love to, but the link is definitely there and I can go through it a little bit. So, basically, because the cluster registry... awesome.
D
Basically, because the cluster registry is such a convenient spot, with lots of juicy metadata about where your clusters live, what auth providers they might have, or how to log into their cluster, I think it is a really good candidate for possibly having some convenient methods in kubectl to understand this format, to read it, and to bootstrap a kubeconfig in a very easy way. A really good example is just the fact that the clusters list has all of the information you would need to at least put together the config.
D
You could take that information and put it into a kubeconfig. So I think the biggest thing I would argue is that, as the registry matures and finds more stabilization through betas and so on and so forth, it would be a very interesting idea to have kubectl understand this natively and be able to download that information for the user. I think where it gets both more interesting and a little bit more complicated is with this authentication info object that we have in the cluster registry.
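To make the kubeconfig-bootstrapping point concrete, here is a minimal sketch, not actual cluster registry code, of turning a registry entry's endpoint and CA data into a kubeconfig using client-go's clientcmd types. The regCluster struct and its fields are assumptions about what a registry record carries, not the real API type.

```go
// Minimal sketch: write a kubeconfig entry from cluster-registry-style data.
package registrybootstrap

import (
	clientcmd "k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// regCluster is a hypothetical, simplified view of a cluster registry record.
type regCluster struct {
	Name        string
	ServerURL   string // e.g. https://203.0.113.10:6443
	CABundlePEM []byte // serving CA for the cluster's API server
}

func writeKubeconfig(c regCluster, path string) error {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters[c.Name] = &clientcmdapi.Cluster{
		Server:                   c.ServerURL,
		CertificateAuthorityData: c.CABundlePEM,
	}
	// Credentials would come from the auth info flow discussed below;
	// an empty AuthInfo is left as a placeholder.
	cfg.AuthInfos[c.Name] = &clientcmdapi.AuthInfo{}
	cfg.Contexts[c.Name] = &clientcmdapi.Context{Cluster: c.Name, AuthInfo: c.Name}
	cfg.CurrentContext = c.Name
	return clientcmd.WriteToFile(*cfg, path)
}
```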
D
This can be really useful for things like: OK, I want to know how to get a bearer token to talk to the Kubernetes API, or knowing what particular authentication to use, or how to verify a token, possibly if you're something like the dashboard. In this doc I have a little bit of a strawman talking about how you might encode something like an OAuth2 device flow. So if whoever's sharing can scroll down a little bit: this cluster info has this auth info information.
D
You could do some sort of OAuth dance with that website, log in to the provider using a corporate identity or something like your GitHub identity, and then be authenticated against Kubernetes instantly, or have kubectl be authenticated against Kubernetes instantly, and bootstrap very, very quickly. The little strawman I put down here at the bottom that's being displayed is the kubectl support.
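As a rough illustration of the dance being described, and not the strawman from the doc itself, here is a minimal OAuth2 device-flow sketch in Go (RFC 8628 style). The endpoints and client ID are placeholders that a real implementation would presumably read from the cluster's auth info.

```go
// Rough sketch of an OAuth2 device authorization flow; all endpoints and the
// client ID are hypothetical placeholders.
package devicetoken

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

type deviceAuth struct {
	DeviceCode      string `json:"device_code"`
	UserCode        string `json:"user_code"`
	VerificationURI string `json:"verification_uri"`
	Interval        int    `json:"interval"`
}

type tokenResp struct {
	AccessToken string `json:"access_token"`
	Error       string `json:"error"`
}

func getBearerToken(authzEndpoint, tokenEndpoint, clientID string) (string, error) {
	// Step 1: ask the provider for a device code and a URL for the user.
	resp, err := http.PostForm(authzEndpoint, url.Values{"client_id": {clientID}})
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var da deviceAuth
	if err := json.NewDecoder(resp.Body).Decode(&da); err != nil {
		return "", err
	}
	fmt.Printf("Open %s and enter code %s\n", da.VerificationURI, da.UserCode)

	// Step 2: poll the token endpoint until the user has logged in.
	for {
		time.Sleep(time.Duration(da.Interval) * time.Second)
		resp, err := http.PostForm(tokenEndpoint, url.Values{
			"client_id":   {clientID},
			"device_code": {da.DeviceCode},
			"grant_type":  {"urn:ietf:params:oauth:grant-type:device_code"},
		})
		if err != nil {
			return "", err
		}
		var tr tokenResp
		err = json.NewDecoder(resp.Body).Decode(&tr)
		resp.Body.Close()
		if err != nil {
			return "", err
		}
		if tr.AccessToken != "" {
			return tr.AccessToken, nil // usable as a bearer token against the API server
		}
		if tr.Error != "" && tr.Error != "authorization_pending" {
			return "", fmt.Errorf("device flow failed: %s", tr.Error)
		}
	}
}
```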
D
We could fairly quickly replicate things like kubeadm join or oc login, which sort of solve the sub-problem of taking something that has only bootstrapping credentials and turning that into real credentials. We'd replicate that by pointing kubectl at a cluster registry and then having it bootstrap all of the information it needs to authenticate the user against the APIs and actually gain credentials on that user's behalf. So there are some problems.
D
There are some problems that I've tried to outline a little bit, and basically it's going to be difficult for us... maybe we can talk about that. There's actually a question right here, which is: have I looked into plugin mechanisms for kubectl yet? I think that plugins would be a good way of prototyping these types of things. The problem is that if you're going to ship an additional binary with kubectl, you largely might as well just wrap kubectl and do these logins yourself.
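For comparison, the "just wrap kubectl" option might look something like the following sketch, where the wrapper performs the login itself and then execs the real kubectl. The bootstrapKubeconfig helper is a hypothetical stand-in for the registry lookup plus auth dance sketched above.

```go
// Tiny sketch of wrapping kubectl: do the login yourself, write a kubeconfig,
// then hand everything else to the real kubectl binary.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// bootstrapKubeconfig is a hypothetical stub standing in for the registry
// lookup and auth dance; it returns the path of the kubeconfig it wrote.
func bootstrapKubeconfig() (string, error) {
	return os.Getenv("HOME") + "/.kube/registry-config", nil
}

func main() {
	path, err := bootstrapKubeconfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, "login failed:", err)
		os.Exit(1)
	}
	// Pass all arguments through to kubectl, pointed at the new kubeconfig.
	cmd := exec.Command("kubectl", os.Args[1:]...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+path)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```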
D
So I think that there are a lot of sort of proprietary mechanisms; people who, not invented, but sort of have their own internal standards for how to log in. Maybe they're using a YubiKey, maybe something else, and they have specific endpoints that implement custom APIs for gaining other credentials. I think those would be hard-pressed to go into something like kubectl, and a plugin mechanism, or a wrapper around kubectl, would be beneficial there.
D
I think there is value in trying to create more endorsed upstream mechanisms that are standards, that are, you know, based on RFCs that are public and agreed upon, and that express how to quickly bootstrap a generic kubectl. In that way, we can start having more identity providers sort of aim for that, I guess. That is a little bit more contentious, and I...
D
I don't want to distract from that, because it'll probably draw lots and lots of feedback, but this is the basic premise of what I see the auth info representing and how it potentially could be used. So yeah, I guess that's all I've got for now.
E
As far as I know, the goal is to eliminate code from kubectl, and not to add references to things that aren't in main Kubernetes. I could be wrong; I think it might be worth a discussion with SIG CLI. Yeah, I think this is probably a new case; I hadn't considered something of this form before, so it might be worth just...
F
The other thing to consider is that implementing something as a plugin means it doesn't have to ship with kubectl; that's kind of a release thing versus a development thing. So maybe something could be developed as a plugin in the cluster registry that is part of the release process, and it can be bundled with kubectl and distributed that way. Yeah, yeah.
G
We can move on, I mean, if there isn't anything specific beyond this. Me, I personally like what you're suggesting, and from what I understand, the cluster registry seems to be one central point which might be a place to discover clusters and all that stuff. So it makes sense to have native support in kubectl, as Eric mentioned, and it might be a reference for everybody else, but I'm not sure if that's the best place to do it. I have my opinions about this.
C
Yeah, thank you very much, Eric. I really like the idea too; I think it sounds pretty interesting. But it definitely does sound like it needs to be taken up with SIG CLI to get some sense from their side of whether this is something that they would be in favor of, and how they would recommend implementing it.
C
Obviously, a lot of details here are left out in particulars, but I want to have a sort of timeline and milestones for an alpha, a beta, and a sort of stable release of the cluster registry, and to define what things go into those. So down here at the bottom, you can see where this is particularly addressed: the milestones and timelines, and the goals for each of those.
C
The idea is for an alpha to be out this quarter, which would in essence be a functional cluster registry API server with an alpha API that we've all agreed upon, enough documentation, and readiness for people to start trying it out. With the results of that, we can hopefully then move the API into beta, write some more tests, and have better documentation. Once I think it's in a beta state...
C
Most of the work done to make it stable should be polishing the docs, making sure that we understand all the particulars around having good documentation, making sure that the release process is very well set up, and making sure that the ways users are supposed to install it are very well understood and very well set up. I haven't gotten much contention in the comments here.
E
I mean, I think our goal... or at least I have a couple of things I want to bring up, but go ahead, I didn't mean to interrupt.
A
You know, I think we're just asking what people did want to bring up. You know, we want to just keep barreling forward, and except for the auth corner, it looks like a lot of this is straightforward. So let's get the straightforward pieces behind us as soon as possible. Mm-hmm.
C
That is a good point, and I think one of the goals of this is that, if the SIG agrees with this plan coming out of this, I intend to start making more milestones and prioritizing issues into them, to hopefully make it easier for people to come in and actually contribute. I know that right now the repository has sort of a list of issues that don't have much prioritization or context, but I think with milestones in place...
G
Yep, that sounds good. So I did go through the readme that you have published there. Currently I see that there are three contributors, right? Mm-hmm. And do you plan to keep working on that? Yeah. So there are...
C
So the way that it is deployed today is via the crinit tool. That is intended to be a way that you could deploy it to production if you so desire, but it's not intended to be the canonical or only way to deploy it. The cluster registry is meant to also be usable as an aggregated API server, and so be able to use that etcd instance.
F
I find it unlikely that a production scenario would deploy separate etcd instances per API, simply because of the costs of having to ensure etcd reliability, like the number of nodes you require. That seems really onerous if you have n times the number of APIs. You tend to just use a prefix and use the same server.
F
I mean, when you're configuring it, you typically use a prefix, right? The prefix governs everything that happens. So you're right, it's not super secure, but if you're deploying these servers, presumably you trust that they're not going to access prefixes that they're not using.
A
Like I said, I think I agree with everything you just said about the cost. The nice thing about this is that we can let the users decide how they want it deployed, decoupled from the implementation of the cluster registry. But I can see things going the other way: etcd is still, in my mind, a little young compared to just a normal key-value store.
C
Yeah, so there's no intention that the cluster registry with its own etcd is the only way to deploy it. I guess on my side, I didn't necessarily know what people's requirements would be for using the cluster registry, so I just hadn't gone and defined deployment strategies other than repurposing the already existing tool as a bootstrap thing that gets people started with testing, and to have something that people could use to start playing with it.
C
But I think that is something that will come up, definitely in a beta or GA or stable phase: having documentation around what the actual requirements are for deploying a cluster registry, potentially having helm charts that let you do different kinds of deployments, and just saying, here's the container image, this is what it requires, set it up how you want. I think a gradation of anything from "run this command and you'll have one to play with" to "here's the container image, go configure it how you see fit" seems reasonable to me.
C
So we're also discussing the possibility of, sort of, the need for the crinit tool versus just a helm chart for deployment of the cluster registry. I know that we're kind of discussing it in the doc a little bit already. It does seem like perhaps the original reasons to have a binary deploy the cluster registry, similar to the Federation control plane, were things that the existing Helm 2.7 does support. Helm 2.7 was released yesterday, and it looks like there's support for generating TLS certificates and keys actually within the chart itself.
It does also look like the Service Catalog deploys the API server using this mechanism; or rather, it doesn't use the existing functions yet, but it does deploy using the service hosts specified in the TLS certificate signing request. So I think that's something we should look at. I think there's enough support in existing Helm to possibly move away from using a binary, if only because it does add extra work to actually maintain the binary, deploy it, and move it along.
C
My concern is that I don't necessarily know whether having helm charts creates extra work for somebody who wants to just try it out. Now they have to install helm in their cluster as well, and if that creates an extra burden for the core "just try it and go" case, if it's like, oh, now you have to do seven other things instead of running one command, that, for me, is a bit disconcerting. But I imagine...
C
This is a problem that other tools have solved in a reasonable way, since I know helm is quite commonly used. So yeah, I think it's worth evaluating whether it can actually replace the tool, because I agree, I don't really want to have to maintain a separate tool. It's unfortunate; it's a decent amount of extra work to have to keep that tool and test that tool.
C
I would definitely like to push that work off, mostly having helm do the testing and just writing a chart, which is declarative. Yeah, I think it's something we will have to look into more, and I noticed Paul said they have a PR to move onto the helm functionality. When that's ready, definitely share it, so we can evaluate it and look at it.
E
I think that's a really good question. I am not certain off the top of my head whether the Sprig library that helm is using supports injecting your own CA to do the signing. That's a really good question. I believe that the pull request in Service Catalog is just using a self-signed one.
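To ground the CA question: the two operations at issue, minting a CA and then signing a serving certificate with it, look roughly like the following Go sketch. This approximates in spirit what a chart's cert-generation helpers (or crinit) would need to do; it is not the actual chart, crinit, or Service Catalog code, and the names are placeholders.

```go
// Sketch: create a self-signed CA, then sign a serving certificate for the
// in-cluster service DNS name with it.
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

func genCAAndServingCert(svcDNS string) (caDER, srvDER []byte, err error) {
	now := time.Now()

	// 1. Self-signed CA, valid for a year.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "cluster-registry-ca"},
		NotBefore:             now,
		NotAfter:              now.AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err = x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return nil, nil, err
	}

	// 2. Serving cert for e.g. "clusterregistry.clusterregistry.svc",
	//    signed by the CA above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: svcDNS},
		DNSNames:     []string{svcDNS},
		NotBefore:    now,
		NotAfter:     now.AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	return caDER, srvDER, err
}
```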
E
For clarity, the problem space is a little simpler in Service Catalog, because we basically just have to generate the certs that the API aggregator will then use to talk to the Service Catalog API server. And in the APIService resource that you create, which the aggregator knows about, you can just put the CA bundle to use if, for example, you have a self-signed certificate, or you need to use a bundle that's not in the system trust store.
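As a hedged sketch of what "put the CA bundle in the APIService resource" looks like, using the kube-aggregator v1beta1 types of this era: the group/version, service name, and namespace below are assumptions for illustration, not taken from actual cluster registry manifests.

```go
// Sketch: register an aggregated API server with the aggregator, telling it
// to trust the CA that signed the backend's serving cert via spec.caBundle.
package apiservicesketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1beta1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1"
)

func clusterRegistryAPIService(caBundlePEM []byte) *apiregistrationv1beta1.APIService {
	return &apiregistrationv1beta1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.clusterregistry.k8s.io"},
		Spec: apiregistrationv1beta1.APIServiceSpec{
			Group:   "clusterregistry.k8s.io", // assumed group/version
			Version: "v1alpha1",
			Service: &apiregistrationv1beta1.ServiceReference{
				Namespace: "clusterregistry", // hypothetical names
				Name:      "clusterregistry",
			},
			// The aggregator verifies the backend's serving cert against
			// this bundle: this is where a self-signed CA from a chart goes.
			CABundle:             caBundlePEM,
			GroupPriorityMinimum: 1000,
			VersionPriority:      15,
		},
	}
}
```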
E
So it seems like the problem space is a little simpler there. I don't know off the top of my head if you can inject your own CA to use, and there are probably scenarios with your own PKI that this wouldn't work with. So I think it's worth evaluating exactly what the capabilities are versus what's in crinit. That's all.
C
We have an existing issue that we have somebody looking at right now. It does seem like the external load balancer IP could be an issue, although Service Catalog uses the built-in Kubernetes DNS that gets deployed for that service, and that seems to be handling it appropriately. So perhaps we can just rely on the DNS capability there. As far as providing your own certs, we can probably use variables as well, so you can pass those in and override whatever the chart templates specify.
E
Upon further consideration since we last discussed this, I do think the Service Catalog problem space is simpler, because TLS terminates at the aggregator. So TLS between the aggregator and the Service Catalog API server terminates at the aggregator, and then you use whatever TLS the aggregator uses for its serving certificate to talk to it. So it's potentially a simpler problem space, now that I'm thinking further about it.
A
Is this something that we should maybe send to a thread and start a discussion there, so we eventually have the written word and make sure that we agree on what the problem is and what a good solution is? This feels to me like something that somebody should maybe take ownership of and run down, you know, in written form.
C
So I'll send out a thread to the SIG, CCing relevant people, to sort of kick off this discussion. I can also do this in a PR, or we can do this in an issue in the repository, if people are okay with that. I think there already is an issue; I can assign that issue. I guess it's not assigned because of Kubernetes repository membership. I'll send out something to try and drive this.
C
My initial working plan was to not check in vendored dependencies, because I don't know what the requirements are for somebody who would be vendoring in the cluster registry. In my experience, it seems like having vendored dependencies checked in makes it more difficult for other things to vendor you in. And... it's not true? It's not true. Okay, no.
F
For the vendoring, I mean, if you use a good vendoring tool, like glide, which we're using for Kubernetes Federation, it's simply a matter of passing the right flag, and it'll strip the dependencies that have been vendored, re-vendor them at the root level, and make sure that there's coherency across all the things you're vendoring. So there's tool support for the vendoring, I mean.
F
A good point. So I'm liking Bazel more than maybe I did in the past, but one of the challenges with going Bazel-only is that it forces everybody else to also use Bazel, and maybe somebody who wants to vendor the cluster registry isn't going to be using Bazel as their primary mechanism. But doesn't using glide then force somebody else to use glide as well? No, because all the vendoring tools basically have support for all the other vendoring tools.
A
You know, is there anyone, at least Maru, who feels comfortable with understanding the state of the art in Kubernetes with vendoring and deps in general? Because I know that when I joined the group two years ago it was complicated, and I learned a lot about it. I know things have really moved on since then, and I don't think the Googlers, at least on the multicluster team here, have a lot of deep expertise. Or do you, Jonathan?
C
I have been talking internally to the people who deal with API machinery and repository infrastructure; I've been talking to Jeff and other people. The impression I got was that the goal is just not to... I guess I have not explicitly asked about vendoring, so I think I can follow up with them. What I wanted to do was to not go with the current state of things.
C
And one of the ways I want to answer that is to ask the people who work on how Kubernetes depends on other code whether they intend to continue to vendor other things into the tree, or whether they intend to move to a mechanism where they're not depending on checked-in vendored dependencies.
G
I can tell you a couple of things that are sort of ongoing. One is that these vendoring tools that we were talking about are transient. For example, Maru mentioned that currently the best solution, or best choice, across different repos is glide, but maybe six months down the line, or sometime in the future, it might not be.
G
Also, because there are multiple repos, and there is code being moved out or reorganized or whatever, there is a set of dependencies which most of the repos might want to depend on, which is common across multiple repos, and that is not answered yet. I have seen multiple open issues in Kubernetes itself which are trying to answer all these questions, but they are all transient. So you have to answer, for now, what you want to do.
F
I'd say that there is complication in the near term of following that. I've had to jump through hoops to get OpenAPI code generated for vendored kube, because they decided not to check it in in kube, which is fine, because it was causing lots of merge conflicts. But then, if you're vendoring kube, you need to be able to generate it; you have to have the same sort of mechanisms in place to generate vendored to non-vendored.
F
Nobody has done that work for OpenAPI, and I guarantee nobody's done that work for client-gen, because it's still generated in kube. I'm not saying you can't do it; I'm saying there'll be work involved to make sure that you can generate within cluster registry's Bazel build, and also enable that same code to be generated consistently from a project that's vendoring the cluster registry.
F
It is, and so one of the things that we touched on, and Ivan and Paul feel free to chime in, is that Bazel's great as long as everything is just Bazel. In some ways it's a good thing; it has all kinds of advantages in terms of how it enables you to manage a dependency graph. But it means anyone who's consuming it is suddenly on the hook for using Bazel as well, and so it's kind of viral, in the same way that, like, GPL is viral.
F
I don't think the community as a whole has really considered the implications of that; it's just kind of happening, because it's not the first-class build system for Kubernetes, yet it's kind of in the process of becoming so. I just think we're going to have to consider that. We have some concern within Red Hat about being able to consume a project that uses the Bazel build if, internally, the way that we build our RPMs is incompatible with the Bazel build, at least not initially, and so that would mean that it would...
G
Yeah, I think if I... so, Jonathan, do you want to share the notes once again, in place of the doc that you were sharing, or let me share. I just need to give an update about the progress on the Federation move-out, and there is one point I want to...
G
There's one point I want to raise and get help on, which is moving the issues. We have approximately 160-plus issues which are against sig/multicluster. So how exactly do we triage them and move them to, say, the new repo? And I believe none of them are against the cluster registry. Am I right, Jonathan? You might be able to clearly say that.
K
Yeah, I have a suggestion. So I think there are two separate issues here. One is the mechanics of, you know, exactly where these issues live. Right now they live in the core repo, and presumably, logically, they should all be moved out. But then there is a somewhat orthogonal issue, which is, you know, how do we actually approach these things: who gets them, who works on them?
K
How do we prioritize them, etc.? I've done a fair bit towards the latter recently, in that I have taken all of the backlog, prioritized it, and made sure that the assignments are sane. So I think that we should just continue with that, and then in parallel figure out when and how to move them to the new repo. I suggest that we just make that part of the repo move project; you guys have that in hand, so do the mechanics that way.
K
Yeah, I'm going to be honest: I don't agree with that, Maru. I think there are, you know, things like triaging the issues and reproducing bugs and doing things like that, which do not have to wait for code moves, and I really don't want us to, like, you know, bring the world to a halt. And they're already at a halt.
G
Okay, okay. So what I can say is that we will defer this and maybe check again next meeting, or from the meeting after that onwards. The update about the move-out is that we are almost there, and we should be able to probably cut over sometime this week. I have updated a short status in the notes, so if there are no questions on that, I don't really need to, I mean, give an update on them.
F
Right now, there are PRs posted for k/k and for some supporting repos, kube-openapi, that I want to see merged before we cut over. They're what enables us to generate OpenAPI out of tree. Once those merge, if the DNS PR is not merged, I'm just going to say it has to target the new repo, but then it'll be a matter of just doing the work to post the removal.
F
I'll merge the removal PR, re-vendor, re-spin the new PR for creating the code in the new repo, and then we'll be good. So hopefully that will happen Thursday at the latest. Obviously, once that's done, we can send an announcement to the Kubernetes and SIG Multicluster lists so people know that it's happened: no new code into the tree for Federation, and we'll probably just close all the existing PRs with a notification as to what action they actually need to take.
C
Apologies, we're having some weird issues on our side with our speakerphone, so we were not audible, I think, for the last ten minutes. Oh, okay. You asked earlier about cluster registry issues: I don't think there are any in k/k, but if there are, just send them my way and I will figure out how to migrate them into the cluster registry repo specifically.