From YouTube: Kubernetes SIG Multicluster 20171107
G
Okay, so what I want to present is a command-line tool which we are calling kubemci, and what it does is it helps you manage multi-cluster ingresses. So it's similar to federated ingress, but without Federation: you don't need to manage a whole control plane and register clusters to it. It's a simple command-line tool that you can run anywhere to configure multi-cluster ingress. You don't need a long-running controller for this. And the way it works is:
G
You can get the status of an existing load balancer, and in future we might add other commands as well. For the list of clusters, it takes a cluster list in the form of a simple kubeconfig. So you give it a kubeconfig, and it will extract all the contexts from that kubeconfig. But if you are also running a cluster registry, you can instead point it at the cluster registry and a list of clusters, and it will pick the list of clusters from there. As for the code:
G
It's open source, it's on GitHub, and here are the links; I'll also add them to the agenda so you can go look at the code as well. I'll also demo how it works.
A quick demo: I'm using a demo script here so that I don't have to type a lot while I'm doing this. The setup I have is two clusters, one in Europe and one in the US, and here I'm listing the contexts from my kubeconfig.
G
This is the kubeconfig that I'll pass to kubemci. To start with, I have a zone-printer app that's in the examples in the kubemci repository; that's the example I'm using for this demo. What it does is print the zone in which it's running. I create this app in both my clusters: first I create it in the US, and now I create it in Europe. This way I have the app running in both those clusters, and once I have that, I can start using kubemci to configure my multi-cluster ingress.
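The per-cluster deployment step described above can be sketched with plain kubectl; the context names and manifest path here are illustrative placeholders, not the exact ones used in the demo:

```shell
# Deploy the zone-printer example into each cluster by switching
# kubeconfig contexts. "cluster-us"/"cluster-eu" and the manifest
# path are hypothetical placeholders for this sketch.
kubectl --context=cluster-us apply -f examples/zone-printer/
kubectl --context=cluster-eu apply -f examples/zone-printer/
```

Running the same manifest against each context is what leaves the app deployed in both clusters before kubemci is invoked.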
G
And this is how kubemci looks; this is just the help command for kubemci. These are the commands it supports in this version, and these are the flags it has. An example kubemci create command is this: you run kubemci create, where zone-printer is the name of my load balancer, and I pass it the ingress spec, my GCP project, and a kubeconfig.
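Based on the description above (a load balancer name plus an ingress spec, a GCP project, and a kubeconfig), the create invocation looks roughly like this; the exact flag spellings are an assumption and should be checked against the kubemci repository:

```shell
# Create a multi-cluster ingress named "zone-printer" across all
# clusters listed in the given kubeconfig. Project name and file
# paths are placeholders.
kubemci create zone-printer \
    --ingress=examples/zone-printer/ingress.yaml \
    --gcp-project=my-gcp-project \
    --kubeconfig=clusters.yaml
```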
G
So this configures both those clusters, the one in the EU and the one in the US. I know the output is pretty noisy; it prints out everything it's doing, so we might support a quiet mode as well. But it shows you everything it's doing: it's creating the backend services, the health checks, everything. So now it has created all these resources in GCLB, and what GCLB is doing in the background is that it has configured the health checks that should go to all these instances, you see?
G
So you can see what we have there, and it takes a while for these health checks to succeed, so we can wait for that. It also supports get-status, so you can get the status of the load balancer: it will print the IP address and which clusters are in its spec, if you want to look at it later. Eventually, once everything works, the checks against this IP address should succeed, but it takes some time.
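The status check mentioned here would be along these lines; the load balancer name and project are placeholders, and the flag names are assumed from the create example above:

```shell
# Print the load balancer's IP address and the clusters in its spec.
kubemci get-status zone-printer --gcp-project=my-gcp-project
```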
H
Okay, so the current ingress controller... I mean, just thinking back to the Federation implementation: it basically left everything to the in-cluster ingress controller in the first cluster, and then, once it had discovered that the load balancer et cetera had all been set up, it just applied those to the other clusters. Is that different now? Does this work the same way or differently?
E
As I said in your intro, Nikhil, a lot of the Google support tickets that we were fielding for Federation had to do with users who wanted to make use of GCLB, because they were using it in other products that were not Kubernetes. They wanted a supported path to use that feature in GKE. So this is kind of our stopgap tool, to be able to have that mode supported in Kubernetes.
G
And I can also show you the code; it's all open source, so you can try it out. If anyone wants to try it out, especially with GCP, that would be great: contact us and we can help you set up. If you try it out, we would love feedback, and anyone can go look at the code on GitHub and file issues. There's also the example which I was using; all of this is in the repository as well, if you want to try it out yourself.
E
So, I think... Christian just walked out, probably getting a drink of water or something. I think the use case is obviously something that we've seen get a lot of interest from a wide array of customers. We sort of thought we would have just a few interested parties, but we've kind of been inundated with interest. So we think this is definitely something that's worth.
E
Investing a lot more in. But a command-line tool that doesn't really have a lot of self-healing or monitoring or deep integration with the life cycle of the system is probably not the way to go. We're still trying to figure this out ourselves, and we're definitely bringing this to the SIG to discuss. Maybe the right thing to do is to build a controller that sits on top of the cluster registry, or that works with the cluster registry.
E
Maybe, you know, there's another daemon or a service of its own; we're not really sure. I think the idea right now is first to deploy this, see what customers like or don't like, find out where the mileage varies, and then use that to inform how we want to refine the functionality itself and also what the deployment model should be. You know, the default idea for everything in Kubernetes these days is: let's just write a controller, manage the life cycle of that controller, and so on.
J
So, yeah, one of my follow-up questions would be: is the pattern here something that you guys, sorry, you folks, have foreseen eventually being pluggable? Like hooks for "do this on GCP", "do this on another cloud", "do this on some appliance-type thing"? I don't know if you've thought that far yet.
E
I can't imagine, for any of the solutions that we're doing in SIG Multicluster, that it makes sense, even from, let's say, a Google-only business standpoint, to lock in this sort of functionality between clusters on only one cloud. It doesn't make sense technically, it doesn't make sense for Kubernetes' open-source philosophy, where we want Kubernetes to be the best answer in all environments, and it doesn't even make good business sense. So this is one reason also why it's a stopgap that we were able to.
E
You know, what you see here is the result of Nikhil and one or two other people just banging on it really hard for about three or four weeks. But one of the next things that we're also looking at, and that we do want to talk to the SIG about, is how we expand this so that it's not just a Google solution.
J
So now, I guess, one other thing I'd be curious about is: what does the integration between this and the cluster registry look like? Is it as simple as saying "use this name of a particular cluster in the cluster registry" instead of "I've got to plug this information in"? I guess that changes if this tool were to migrate to a resource-and-controller type of thing. I just wondered, since I saw it was something that had been considered in the slide that you showed.
G
Right now it just takes the list of clusters, which is a kubeconfig. To integrate with the cluster registry, it would take the endpoint of the cluster registry and probably a label selector, or some query for selecting a set of clusters from the cluster registry. It could be the exact name as well, like you said.
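As a purely hypothetical sketch of the integration being discussed (the speaker is describing a possible future design, so none of these flags are real; the registry endpoint and selector syntax are invented for illustration):

```shell
# Hypothetical: select clusters from a cluster registry endpoint
# by label selector instead of passing a kubeconfig.
kubemci create zone-printer \
    --ingress=examples/zone-printer/ingress.yaml \
    --gcp-project=my-gcp-project \
    --cluster-registry=https://registry.example.com \
    --cluster-selector="region in (eu, us)"
```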
E
Yeah, that makes sense. Also, you know, we're not really sure; we're still trying to figure this out too. Right behind me is Greg. Nikhil was focusing more on working with customers and users, trying to figure out what their input and feedback is, which he will be relaying to the SIG; Greg is somebody who's going to be focusing on what the next few steps are.
H
Yeah, thanks, Nikhil. I had one or two more questions, if we have time. Sorry, I got called away onto another call a little earlier, so I might have missed some discussion around self-healing. So this is like a point-in-time deployment, right? Whether you use labels or whether you use cluster names, which I think is what you do at the moment, you basically push this stuff to those clusters, and I'm just sort of thinking:
H
Similarly, stepping forward to where you support label selectors on clusters: if you deployed something saying "deploy it into all European clusters", it would do that at that point in time, and then, if more European clusters were added later, even though your config said "deploy into all European clusters", it wouldn't actually be in all of them. It would be in all of the ones that existed at the point in time when you pushed this thing out. Yes?
H
Yeah, okay, that answers my question. I guess the second question I had was: is there any reason why we couldn't, or wouldn't... I mean, it seems like the functionality in this tool is essentially the same as what's in the ingress controller at the moment, except for the fact that this tool does the load-balancer creation, whereas in Federation, at the moment, that's done by the in-cluster ingress controller. Is there any reason why we wouldn't take that approach?
H
I understand. So what this does is disable that in the in-cluster controllers and move it to the Federation, or to this script in this case. And that was my question: could we, or would we, potentially do the same thing in the federated ingress controller? And I think Nikhil's answer was yes.
L
I think the idea that everything had to be a sync controller is: well, if I'm propagating resources, sure. But really, it's just like Kubernetes: I have an API as a source of truth, and I can do whatever I want with that. The sync controller wasn't meant to preclude that, just to avoid duplicating a lot of code if it's the same pattern; but if it's a different pattern, yeah, do something different.
H
Today, if somebody came along and dinked with the configuration of the load balancer that Nikhil's tool created, it would basically be broken, and the sync controller does the same thing; it just reconciles. So I'm not entirely sure what you mean by moving away from the sync controller.
L
The sync controller is just a model for synchronizing between multiple clusters, but it's not the only model. I guess my concern would be that we don't need to fit everything into "okay, I have a resource in the Federation API; I have to propagate it to all member clusters". I could look at the Federation API and decide that I'm going to configure a single global load balancer that's going to target some Kubernetes clusters. I don't need to try to squeeze that into the sync controller model; it's just doing something different.
B
And it seems like a lot of the things that were implemented in the Federation project early on were things that followed a sync-controller type of model, because those were relatively straightforward to implement. Things that had a sync model with some additional modification of the objects being synchronized, or things that required more complicated logic, were deferred, I think, because there were other things that could be done.
H
Okay, I've just figured out where the confusion was. When you were talking about sync controllers, you were talking about controllers that stamp out identical things in all clusters and don't do anything else, whereas I was just talking about the general concept of syncing and reconciling. Okay, I think I understand where the confusion was.
E
That's good. So, lastly, this is still pretty early for us in working on this. It looks pretty tight, because I think Nikhil went to town on it, as did other members of the team, but it's still relatively new on our end, so it's not like something that we were holding back for a while. Just to make sure that's clear.
A
Yeah. So, we don't have any other updates about SIG-relevant PRs or issues, which might be... yeah, there is one; I think Nikhil has put this up. Do we need to talk about what we need to do at KubeCon? I guess the people who are attending, and anyone who is not attending but would still want to present something or have something taken up there, can put it down here. That might itself be okay, right?
A
Yeah, there is one item later in today's agenda where we probably need a different block of time, where we can kick-start this plan. Yeah, I mean, we'll come to that; let's go through the list. I think the next one is about KubeCon, where we have about a month's time, and we would be coming up with items which might be discussed over there. That's what you said, right? As of now, nobody has a specific item which needs to be put here. Does anybody have anything? Yes.
L
If we have stuff to discuss, we should try to meet more often, and I put on the agenda that I think we actually need time separately to talk about Federation. If you don't want to attend, you don't have to attend, but we have a lot of planning that we need to do, both in preparation for getting 1.9 out the door and for actually having a strategy for getting to GA. We need it just as a way of marshalling the effort to get stuff done.
A
Yeah, so what about this: in addition to this one, we meet next week at the same time? I mean, next week, I guess, people might be present, and it need not be the regular SIG meeting. Later on I have put down a note where it is specified that we might need some discussion outside of this SIG meeting, where we basically chalk out the strategy, the plan for the next step. Basically, the work which we were undertaking to move Federation out of core is sort of complete.
A
There are a couple of PRs that are still in flight, but in a day or two they should be merged, and we should be able to undertake new development pretty soon; from this week onwards we can have PRs which can be properly reviewed and can have proper checks on them. So that brings us to the next step, where we need that planning or strategy to fix up the items that we probably want for GA and what the way to get there would be.
L
I suggest that we don't have time in ten minutes to do that justice, and also not everybody on this call is necessarily invested in that discussion. So, as I said, I think we need to consider reconstituting the working group that was meeting in August, to actually get back to the strategy of getting this thing to a useful state.
H
Yeah, I think that makes sense. I mean, one of the reasons why the working group stopped was that everybody was either busy on something else or busy moving stuff out of the repo, et cetera. So if that's no longer the case, and people are going to free up to work on this, then I think it makes a lot of sense. Here is a concrete suggestion.
J
I'm plus-one on the idea of the Federation working group, but this time slot actually presents a conflict for me: it clashes with another external community meeting that is really, really hard to change. So I may take as an action item sending a Doodle to the group, in the event that people can be a little flexible with the meeting time.
L
So, just briefly: we are out of tree, and we do have CI running, so technically we'd be able to restart right away. But given that 1.9 is looming, I think it makes sense to go through the administrivia of actually getting Federation released when we're not just part of the Kubernetes release process. We're basically pioneering this; we're the first project to exit the tree and be completely outside. There's a bunch of other stuff that's moving out, but it's not actually done.
L
Yeah, so it's good that we're here, but we still have a lot of work to do. In my mind, I don't expect us to get a lot of development done by the end of the cycle; I think a lot of time is going to be taken up just getting the administrivia done. Initially I was like, well, why even bother?
L
Maybe we should just wait till 1.10. But we have to do this anyway, and given the short timeline, I don't think we'll get super-meaningful work done this cycle anyway. So why not just do the stuff that has to be done anyway? Once we're done and we have a process in place, we won't have to do it again for 1.10; it'll be mostly there, documented.
L
If nothing else, it kind of maintains the status quo of being tightly tied to the Kubernetes release cycle. Maybe we'll do a little bit of a delay, maybe we'll have some compatibility issues, but the goal for 1.10 is getting something out that's supportable and released very close to Kubernetes. Anyway, there's a lot of detail; I don't think we need to get into it too much here.
L
I think, given that we're not actually going to be doing much in the way of feature work (we probably want to make sure we do some testing, make sure there's nothing major that's broken), essentially 1.9 would be kind of like the 1.8 release without new features. It would just be released separately. Just that amount of work, getting out of tree and releasing separately, is the effort for 1.9. Okay.
A
There was one more suggestion here about this release, but I think you skipped it: that we could skip the 1.9 release cycle altogether, because whatever is going to be released would be exactly the same as 1.8. Got it. So, Quinton, what you are saying is that we should ideally release the binaries for 1.9 as well, but...
H
My main motivation was just to make sure that we know how to do it and that we have a process that works, so that when 1.10 comes along, which will hopefully have a lot more actual new functionality in it, we don't fall on our faces by not being able to release it at all. I'd rather release a 1.9 that is identical to 1.8, just to know how to release something, I mean.
L
I was kind of like, why are we going to do a release for 1.9? But then we had internal discussions, and, similar to your thinking, Quinton, getting through all the administrivia of getting a release out... I think doing that now makes sense, given that the time won't necessarily be better spent restarting development; we just don't have enough time this cycle to really do anything super meaningful, yeah.
A
Yeah, so that's agreeable, yeah. And, as I said, thanks a lot to Maru and Shashi; they have done a lot of work for this. Okay, so the next question is: do we need a separate Federation working group (I think Maru did put this up), or do we fall back to the same set of people who were doing these discussions earlier?
L
My concern was that it seems like in the last few SIG meetings we're not having a lot of time to discuss Federation, and Federation is such a big topic. I mean, we have more code there than in any other effort, regardless of how exciting the other stuff is, so we have a lot of stuff to discuss. The question is: do we want to have that separate from multi-cluster, so that we're not getting in the way of new discussion?
E
I mean, there are also a lot of working groups in storage, for things like snapshots and volume updates, and I think the important thing is that the working groups make sure they give digests or updates to the main SIG, so that they don't completely lose touch with each other and are still able to coordinate. And I think there's a lot of opportunity to coordinate here, especially as we want to make sure that we possibly incorporate the cluster registry, Federation, and other pieces too.
J
To be absolutely clear: this timeslot is not the greatest for me. I've been getting by because it's only every other week; if we were to introduce a working group on the off week, I would be in a bad situation. So I suggest that we find another timeslot within the week to have the working group meeting; then we'd also be able to meet more frequently at that same timeslot than every other week.
A
Okay, we have only six minutes remaining, so, yeah, there are a couple of other things. In the last meeting there was a discussion about the possibility of deploying the cluster registry using Helm charts, and I think one of the folks signed up to have a look at whether there is a possibility of doing that, yeah.
J
So, I don't think that's going to be a workable solution, because, as far as I know, if you have to use a certificate that's signed with a CN for an IP address that you don't know about yet, the things that are built into Helm for creating certificates can't handle that, as far as I know. And I don't see a reason to support two ways of doing something, one of which doesn't do all the things that we need it to do.
J
I think at this point it probably makes sense to just continue with your init binary, and if Helm or something else gets smarter or more capable of handling this in the future, maybe we can reevaluate it then. But I think for now, to support external load balancers, we basically have to stick with custom code. I would love to be wrong about that; if anybody can prove to me that I'm wrong, I will buy you a beer, or a beverage of your choice, at the next KubeCon.
A
Yeah, there was one more question which came up while we were discussing: currently, the Bazel builds that we have don't support multi-arch, so we were thinking that we can just take that path and move our CI jobs to consume the output from the Bazel builds, if that is in sync with the rest of the folks here. Is there anybody who needs multi-arch binaries?
L
As I said, currently Federation is released with Kubernetes and automatically gets multi-arch builds, and I know some people like to run on their little tiny machines, whatever. But the question is, in the near term, do we need to support multi-arch? In the longer term, Bazel will support multi-arch; the support is preliminary and just not fleshed out or solid yet, but the expectation is that it will support multi-arch builds. And if we were to move to Bazel exclusively for builds, that would kind of simplify things.
L
It's more that we've inherited a build system which is really complicated. We've got it working, but maintaining it over time is going to be expensive, so the sooner we can deprecate the make-based build, the better. But in the near term, the question is: do we want to have kubefed released for Mac, not just Linux? If that's the case, then we need to maintain the make build for now. I was...
H
I was going to vote that the legacy build system is just so terrible that we should be prepared to swallow a pretty big pill to get ourselves out of that hell, and that pill is something like not having a binary for kubefed for two months. That seems like a reasonable price to pay to get out of that build hell.
B
So, in the past few weeks I've been doing a lot of work to get vendored dependencies into the cluster registry repository. That work is mostly done; the PR is waiting on final review and submission. I think the most important outstanding thing is that I have two proposal PRs against the community repo, to submit the API design and the project plan into the community repo. Quinton, if you have a chance, I'd appreciate it if you could take a look at those PRs and LGTM them or add your comments.
B
Go and add the comments; otherwise, if I don't see any comments that I don't think can be addressed in follow-ups, then I will submit those PRs on Friday and start moving forward with creating milestones in the cluster registry repository and filing a bunch of issues, at which point...
B
Okay, I think that's mostly it on my end. There have been some other updates: Ivan has been working on updating the crinit tool to do an aggregated deployment model, and we made some conceptual progress there in terms of what needs to be done and how the tool should be set up in order to actually enable that mode. I think that's about it from my side. Thank you, Ramon.