From YouTube: Kubernetes Federation WG sync 20180629
C
The crux has been setting up a kube 1.11 cluster to be able to test against, like the CI is passing for. I can run the managed-cluster tests, but I'm like, oh, it has to actually set up a real cluster, because the kubeadm docker-in-docker cluster does not work with my setup. I think it's actually something really troublesome. I haven't used docker-in-docker in a while; thank you for not supporting 1.11.
C
So it's like, okay, I have to go and figure out how to actually deploy kube for real, on real VMs or something, which to me is just a huge burden, but maybe is what needs to be done. You know, the things we're testing, we don't know that they work on a real cluster. If there are gotchas as to controller interactions, we have to figure them all out. I'm tempted to say we could... I don't know. Right now I'm kind of stuck on: do we merge this? It's a huge thing.
C
The tests are passing, you know, modulo whether we've approved all the changes to the code. Even maintaining it, I think that's fine, but it does break people's ability to deploy until we verify that 1.11 works and update the readme and so on. How do people feel about this transition point? I wish to move forward.
B
So, for me, I would say I think we need a couple of folks to do a manual test and make sure they can deploy it on, like, a real kube cluster. I'm happy to be one of those people. We also have some experience so far at Red Hat deploying Federation v2 and having it work with basically arbitrary kubes, so we can definitely offer up that kind of testing.
A
Yeah, that sounds logical to me. However, I think I did communicate about this, that we wouldn't be able to work at speed on things like this. So I had a brief look at the PR today, and, like, brief was really brief. I guess we can spend some time now having more people look at what the changes are. I think it's mainly in the infrastructure, and there might not be many comments, but just to have a look at it and understand.
B
Where we are now is fine, and we're trying to determine whether we want to get a higher degree of certainty before we do that. That's the conversation we're trying to have right now: the CI tests work, but we know that the tests are very simple and don't necessarily exercise working with a real cluster, one that exists outside the life of the test, right? So I...
C
I think, I mean, the same tests are run in both integration and non-integration. The difference is that an unmanaged deployment will have a more representative kube control plane; it'll have things like the namespace controller, which we've had interactions with in the past, and, you know, the deployment and replica set controllers. So that's really the catch: we don't have a good way of exercising that with integration alone.
A
So I think it could probably take a week, and I agreed that it might take us some time to actually set up that kind of environment for this, where real clusters are being used.
C
The suggestion isn't that we create an automated e2e job. Like, we do want that, but I don't think that's a blocker. It's more like: can we reliably set something up and have it work locally, and is it reproducible? That's the key for me, and that's why, to me, it's kind of problematic that we don't have a lightweight solution like minikube to validate it against, because for all intents and purposes that is representative for the purposes of Federation, and we just don't know that, yeah.
A
So it seemed like... but then you were explaining a little about what you did try. It seemed like you did try it. What is the blocker you did face with minikube? Like, I remember that we had this discussion that one of the directions, one of the alternatives we could try to set up these tests with, is that we can have local minikube clusters, as in the current setup that we have in the Travis CI setup itself, and that might be sufficient at first. Or it might not.
C
I mean, getting docker-in-docker might work. It's just that my current experience trying to get docker-in-docker working is that it doesn't actually work on my system. I spent a lot of time just having to upgrade to the latest version of docker, figuring out how to bring up a kubeadm docker-in-docker cluster, like what the separate flags are to get that running at least. But then, when it tries to run, it can't actually run my controller manager; it just dies with no obvious problem reported, but...
A
How about this proposal; like, I have a suggestion. So we did decide that we would try to cut a release end of June, in whichever shape, I mean, that's what I remember, and not keep any of the features as a necessary prerequisite for that particular release. So does it sound acceptable that whatever shape the already merged PRs, the already merged features, are in, and whatever is the mechanism of deployment, the readme suggests that if you want to deploy this, you have to follow these steps?
A
It has a user readme which gives you the steps to set it up, and probably we might also need to add a little bit on features, which means that if you want to use this feature, you have to do this; if you want to use that feature, you do that. And then we do this release cycle weekly. And the parallel item, like having an e2e or this kind of setup: is that sort of a necessity for us?
C
I think there are things here that are mandatory for alpha. There's kind of a chain: we don't want to ship something that doesn't use CRDs, because it'll be confusing to switch later, so the switch to kubebuilder, I think, is something we need for alpha. And then the dependency is, okay, well, we need to be able to actually deploy this. We need to be able to have a kube 1.11 cluster, deploy it, and have the tests pass.
C
To my mind, like, I wouldn't want to necessarily say, yeah, here's Federation, and the only way you can test it is you have to deploy a real cluster. We spent some time, at the summit and otherwise, making sure it was possible to do on minikube, because that's a really lightweight entry point for someone who wants to kick the tires. So to my mind, like, I think we can basically tie our release to minikube 1.11 support being available, so that we have...
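Tying the release to minikube supporting 1.11, as suggested here, would make the local check something like the following command sketch. The version string is exactly the availability in question at the time of the meeting; `--kubernetes-version` is an existing minikube flag:

```shell
# Start a local single-node cluster pinned to a specific Kubernetes version.
# This only works once minikube ships support for that version, which is
# the dependency being discussed.
minikube start --kubernetes-version v1.11.0

# Confirm the server version before running the federation deploy steps.
kubectl version
```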
B
But it means that we're gonna introduce, like, additional complications. We would definitely need to have a script or something like that that deployed it, and that seems like a fair amount of work to do, especially when it seems, from where I'm personally sitting right now, like the kubebuilder refactor is just around the corner.
B
Honestly, I think that if we absolutely, positively had to get something into a release, like, today, I would say that probably means we're looking at building a helm chart, because helm is fairly widely available and can do the SSL certificate generation stuff that we need to do. But I would not want to go down that road unless we felt like we absolutely had to, and it doesn't sound like we feel that way, based on what I'm hearing right now.
B
I was gonna say that the specific issue that we ran into conceptually with cluster registry is that there was a mode of deploying cluster registry where the crinit binary created a load balancer service and then had to wait for the service to get an external IP, and helm can't do that. But I don't think that that is a requirement for how we would need to... I don't think we would have the same requirement, at least for, like, an initial alpha release of Federation.
B
That's similar to how, for example, Istio releases are deployed. For Istio, you basically, you literally do, like, a kubectl apply from a URL, which, you know, you don't have to use a URL, you can download the file, but there's basically a static YAML file that deploys everything you need. That, to me, is a very attractive way to deploy, and what's really facilitating that is not so much kubebuilder as it is that we're using CRDs.
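The single static manifest approach described here could look roughly like the following sketch. This is an assumption-laden illustration, not the actual Federation v2 artifacts: the API group, names, and image are placeholders.

```yaml
# Everything needed in one file, applied with:
#   kubectl apply -f federation.yaml   (or a released URL)
apiVersion: v1
kind: Namespace
metadata:
  name: federation-system
---
# CRDs need no TLS bootstrapping, unlike an aggregated API server.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: federatedclusters.federation.example.io   # placeholder group
spec:
  group: federation.example.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: FederatedCluster
    plural: federatedclusters
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: federation-controller-manager
  namespace: federation-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: federation-controller-manager
  template:
    metadata:
      labels:
        app: federation-controller-manager
    spec:
      containers:
      - name: controller-manager
        image: example.com/federation-v2:latest   # placeholder image
```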
B
So we don't have this problem where we've got to have a distinct API server that can serve TLS to the aggregator, which just makes the whole shebang much, much easier to deploy. And as long as I have the bully pulpit, I will just say also one reason why, and I say this as a great proponent of aggregated API servers, why CRDs are fundamentally superior to aggregated API servers.
C
By the time I got to it, I just needed a kube 1.11 cluster, like, any cluster; I don't really care how. I haven't really done this in a while, and I usually find there's a certain amount of friction in actually deploying clusters, because of all the infrastructure and all the things that can go wrong. But who knows, maybe it could be something like kubicorn or...
C
Maybe one of these deployment tools; I haven't touched them lately. So if I can find a way to do it, or somebody else can find a way to do it, and it's reproducible by anybody else, then I think we're good to go. And in parallel to this, I mean, updating the docs and the readme for this new sort of state of the world is probably something that can be done while we're making sure it actually works. And to me those are kind of the...
C
Yeah, something I was talking about with Paul: with the move of cluster registry to have namespaced clusters, I think we need to move FederatedCluster to be namespaced as well, and so they would probably exist in the same place with the secrets for authorizing, or authenticating, those clusters, in the federation-system namespace. I think we want to do that pre-alpha, just because the switch from non-namespaced to namespaced is kind of problematic.
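A namespaced FederatedCluster living next to its credentials secret in federation-system might look roughly like this; the API group and field names are illustrative assumptions, not the actual schema:

```yaml
# Hypothetical sketch: cluster registration and its auth secret share
# the federation-system namespace, so both live and move together.
apiVersion: federation.example.io/v1alpha1   # placeholder API group
kind: FederatedCluster
metadata:
  name: cluster-a
  namespace: federation-system
spec:
  secretRef:
    name: cluster-a-credentials   # secret in the same namespace
---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-a-credentials
  namespace: federation-system
type: Opaque
data:
  token: PHJlZGFjdGVkPg==   # base64 of a placeholder token
```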
B
While we've been doing kind of the bread-and-butter stuff of getting this project spun up, I have some ideas, but I'm not a hundred percent sure if they'll be useful or if they are good ideas; still, I have some ideas that I'd be happy to talk about. Probably today is not a great time to do it, since we don't have a lot of people, but maybe in the next couple of weeks we can dust that discussion off.
A
Yeah, what I was saying is that I think we already had one, like, post-alpha. We have some other pointers also to talk about; I think that was having a usability layer on top of the Federation API someday, and now what Paul mentioned. So we can add these to either an issue in GitHub or someplace trackable, so that we don't lose them, yeah.
B
That's a good idea. I'll just say, as long as we're talking about things we're interested in doing post-alpha: I've already created an issue for this, and we have talked about this a fair amount. We think that it should be possible for you to, like, make a CR, make a resource that indicates that another resource should be federated, and have a controller that generates new CRDs for you based on, like, an input resource, so that you can automatically federate other resources that Federation doesn't know about.
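The "make a CR that federates another resource" idea might be expressed by something like the following; this is purely a sketch of the proposal, and the kind, group, and fields are invented for illustration:

```yaml
# Hypothetical input resource: a controller watching these would generate
# the federated CRDs (template/placement style) for the target type.
apiVersion: federation.example.io/v1alpha1   # placeholder group
kind: FederateTypeRequest                    # invented kind
metadata:
  name: deployments.apps
spec:
  target:
    group: apps
    version: v1
    kind: Deployment
    pluralName: deployments
```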
A
Okay, so I suggest... so you mentioned these are the blockers; these are basically other items which we need to take up or follow with the community, like the support in minikube for 1.11 needs to be followed up with the community, right? So does it make sense to assign sort of owners to these, so we can have an individual person to track each? Not that I think... like, there might be some support. I...
C
Yeah, so I think, like, in terms of the kubebuilder PR, there's a dependency on the kubebuilder folks, making sure that the changes I've made to their code are okay. For testing, I guess I'm just gonna follow Paul's suggestion and use those real clusters, as I had that experience with them. But if other people want to use other deployment mechanisms, I mean, getting familiar with the new kubebuilder way of doing things... I think I may not have updated; I've been working with the deployment and deletion...
C
...scripts, the round-trip stuff. I'll pick that up and make sure that's in the PR so that you can use it. I mean, the goal is that you can do exactly what you would do in a development environment: you could run, like, you know, delete-federation and apply-federation, and it'll basically clean up the cluster, redeploy it, and then you can run after that. Not sure about anybody else.
A
So I have a couple of questions now. You mentioned that you did try the latest; when you say that, I am assuming you are referring to the kubeadm DIND cluster, the upstream one, is that it, I guess? So is there any, I mean, any pattern or any specific point that you could list out? Like, maybe there is an issue, or there is an open issue in that repo which might need to be followed up, and it might be easier to do that, or it was like...
C
Setting feature gates by default, it sets mount propagation now, but it doesn't test some resources. So that's just to get the cluster up and running, and then I have problems that I think are related to, like, docker-in-docker being very dependent on how you configure the host's docker, and I think there's some problem there, and I don't know if I have the patience to try to figure it out.
C
There are just a lot of moving pieces. I'm just kind of remembering the issues I was having with docker-in-docker; it's the same basic thing, and it's really easy to lose all the details. The viable solution, like, the only reason I was suggesting that we consider using docker-in-docker for CI, is that test-infra is maintaining a solution.
C
So we'd, like, never be doing that work. So, as you know, maybe it would be harder to reproduce locally, but in terms of running tests on a maintained cluster in CI, that should work, and if it doesn't, then you know that group will actually devote resources to fixing it. Whereas with this, it's like: oh, docker-in-docker doesn't work on your machine? Sorry, you don't have a lot of recourse; we have to just solve it ourselves, because every machine is different, every operating...
C
...system and docker interaction is different, every mount propagation configuration is different. It's just, like, super complicated. So as much as I'm not a lover of VM-based clusters, because there are so many things that can go wrong, I think docker-in-docker is usually worse. Oops, it hurts me to say that, because I actually really like lightweight clusters, but I...
A
Yeah, actually, like, this brings me to this: we already have a couple of jobs in test-infra. I mean, maybe we should delay this discussion to after alpha or some other time, yeah. What was coming to my mind is that for Federation v1 we had quite a good amount of resources reserved for federation. Even so, there were, like, three or four, I think at least three, different versions of k8s that we were testing, and there were jobs for them.
A
I really don't have the exact right idea of what the status of them is, but as far as I understand, they should not be removed abruptly; if they were being removed, you would get a notice or something like that. So can we utilize those for v2, like, gradually migrate one and then the next and the next...
C
...the control plane. We're not really validating, like, the lower-level kubelet and whether it's using the DIND docker or a real docker; I don't think that's relevant. So the advantage would be that rather than having to spin up a whole bunch of VMs and take a lot of time, you spin up a cluster within a pod on the top cluster, and now you're in one cluster.
A
Yeah, so on that first one I had one more question, I guess, so yeah. So the question was about this: if we want to continue... so, Maru, when you mentioned that, you mentioned something like the test-infra folks also might be supporting this docker-in-docker setup, right? So what does that look like? Is that usable, or is it already, like, in some job? So that might be one for us; that might be the first to go, so who are the consumers of that? I...
A
Okay, so then what might be the best direction to proceed towards alpha, given a situation where minikube doesn't necessarily come up with a solution for 1.11 and the docker-in-docker route is blocked? The last option that might remain to us is, like, what we were talking about at the beginning of the meeting: we can validate at multiple sites, maybe using a managed cluster, et cetera, right? Yeah.
B
Yeah, that is, that's a good time for me. I just...
C
Just an FYI: I'm off Wednesday through Thursday, or, it's like, just the rest of the week; we're off Wednesday. So my hope is we have three days. I think getting this PR, getting the intent just working and getting the PR merged, and probably getting the FederatedCluster namespaced as they said, all seems doable to me in that timeframe. Getting the readme and docs updated, and making sure minikube is available for 1.11, I think those are not things...
C
...that I need to do. My hope is I can make a push to get those three tasks done by the end of Tuesday, and then everyone who's still working can polish that off Thursday, Friday, or Wednesday, if you're working. Okay, okay. So, yeah, the hope would be that either by the end of next week or early the following week we're in a position to release alpha, and maybe the minikube support will be the blocker, and that's not anything we have control over.
A
Yeah, let's, let's keep it like this. So if we have a 1.11 cluster, like, either of us has a 1.11 cluster, and there are pure, reproducible steps to set it up and run the e2e test, or whichever test we are talking about, and they worked, then minikube should not be a blocker for us. Sounds okay.
C
Sorry, so, just for folks who have never used kubebuilder: when you enable a subresource like status, which we use in several of the resources, and you try to create that CRD on a kube API server of less than 1.11, it will fail validation; it will not be able to create that CRD. The reason that's galling is not because the fields in question are actually useful; they just happen to be generated by kubebuilder. If we wanted to, we could go through, as we have already, and just remove the offending fields.
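For reference, the subresource stanza being discussed looks like this in a CRD manifest; per the discussion, creating such a CRD against an API server older than 1.11 fails validation. The names and group below are placeholders:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: federateddeployments.federation.example.io   # placeholder
spec:
  group: federation.example.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: FederatedDeployment
    plural: federateddeployments
  subresources:
    status: {}   # the stanza rejected by pre-1.11 API servers
```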
B
It might be that, if it's gonna take a while to get to 1.11 on minikube, we should go through the exercise at Red Hat, like, Maru and Lindsey, I think you're on it, aren't you; we need to go through the exercise of, let's try this against 1.10 with the offending OpenAPI fields removed, and see if that gets us a much wider and more generally available set of versions that we can run on.
C
Maybe. I mean, I'm kind of, like, going back on my idea of what I think needs to be supported. Maybe the initial alpha version will simply use 1.10, if Paul's experiment plays out, and it will support minikube with profiles and people who deploy real clusters; but I think we should push minikube to get the 1.11 support shipped, because, like, we haven't...
B
We'll, at Red Hat, we'll investigate the possible workaround of, like, whether we could remove the offending OpenAPI stuff and deploy, for example, against a, you know, 1.10 cluster in some public cloud. I think the experience we get from that would probably be very useful. So we will validate that and see if it works.