From YouTube: Kubernetes Federation WG sync 20180725
A: What I can do is probably post in Slack and ask, as a question kind of thing. We set up an agenda in Slack, and if there is nothing that we want to talk about, then depending on what you can see — if there's particularly nothing, yeah. For today, maybe you can give an introduction of your work, yeah.
E: Kubebuilder made the 1.0 release in the last day, so I have just updated the PR to make federation-v2 depend on kubebuilder 1.0. And another thing: for the PR for join control, I think that we already have a lot of discussion there, and I'm not sure — if we can merge it first, I can create some follow-up issues to address the comments there.
E: I know that we have some comments, like we want to improve our error-handling cases, and we may also want some refactoring for both join and unjoin so that we can share some common utilities, etc. We may also need to update the documentation to tell users that we have a new join command, so that users can easily join and unjoin a cluster into the federation control plane. Yeah.
C: I guess I'm okay with merging it; it definitely needs some more work before its usability is good, but sure, yeah. Okay, I think I'll take a look at that. As for kubebuilder, I haven't looked at it since it changed to 1.0 — I mean, what was the actual impact of this change?
E: Yes, so in my patch I also found that currently we are using a hacky approach, because the dependency is vendored. We also fixed a bug there. I think that we may also need to update the Kubernetes dependency, and we may also want to update the apiserver to a new version, to make sure that we don't need to keep updating the vendored dependency.
C: Anyway, that's a technical discussion we can have. Oh, one other dependency — the other problem that we're kind of working around is that kubebuilder was previously not shipping 1.11 binaries. Did that change for 1.0? If they are shipping 1.11, we can remove the hack and the download-binaries scripts.
A: I haven't looked at that yet — no, I don't think they talked about it. So are you saying that kubebuilder now is shipping 1.11 Kubernetes binaries, or no?
A: Yeah, I also wouldn't put much weight on upgrading to a newer or the latest version of kubebuilder. I mean, only give it more priority if it actually updates the version of the components it is shipping to 1.11 or something newer; otherwise I'm not sure it's worth changing everything for this.
A: Yeah, so Shashi and Paul have probably been discussing upgrading the behavior of the multi-cluster DNS API. Last meeting we had some talk about moving the functionality out of the separate repo — moving all the functionality inside the main repo. Shashi has done some updates; do you want to give a review of what you are doing or what you have completed? I saw that you have based it there — yeah, more or less I have summarized that in the PR.
A: Actually, so instead of bringing in that repo, we develop this in federation-v2 instead, maybe, yeah. That depends on the decision, like: how do we want the code structure to be maintained — in multiple repos or in a single repo — and then be more modular? That kind of decision.
A: I don't think this feature as such requires another repo. The only original reason why I maintained another repo was because of the DNS provider library, which used to vendor a lot of provider libraries. Now, since we are planning to use external-dns, we are no longer going to use that, our controller code is a lot simpler, and we don't vendor either external-dns or the DNS provider libraries. So I feel it's a very small change which can be incorporated within federation-v2.
A: So maybe it's also not a tough job to put it into another repo. But going forward, anyway, if we are using external-dns alone, I think we need to make that switch sometime. I think let's go ahead with the external-dns-related path, instead of keeping on maintaining the DNS provider library from federation-v1. So that's my opinion. It's okay if somebody else wants to maintain it — that's fine.
A: It makes sense to push that particular feature into federation-v2 itself and make it modular, by way of some flag for installing or not installing that feature. That should be good enough. It will reduce a lot of work: we don't need to make another Docker image and another release and CI and other stuff just for that feature. So that's my opinion. Yeah, I think it depends. Okay, were you able to sync with Paul?
A: He was at a conference and I couldn't make it, yeah, but anyway I informed him of all the stuff. So particularly right now, more or less, this PR is almost ready except the tests; I think that should be done very soon. And now I'm pushing little changes into external-dns, actually, to make that work — it's nice, and I'll try to push them into their release.
A: I suggest you provide a small write-up of what you are thinking of doing before actually going ahead and writing code, because this has been discussed a lot earlier, and with the way it was implemented in federation-v1 — I don't think that kind of implementation makes much sense when we talk about federation-v2. For example, in some earlier meetings we had a discussion that this particular feature could also be implemented as a tool, which need not necessarily be part of federation-v2.
D: I'm relatively new to this group, but I would like to participate in the work, yeah. In the meantime, I have a question for you: I saw that we implemented some kinds of APIs for services, for pods, and so on. Will it be possible to federate deployments for services between clusters or not?
C: It's not really a matter of defining things with kubebuilder if you don't want to — I mean, that is one way to do it. The primary benefit is that kubebuilder will then generate clients and informers and that kind of thing. However, the propagation mechanism that exists today doesn't require code. So it's possible to create the CRD in the host cluster, and create that CRD in all the member clusters.
C: Then there's some configuration involved: you can create a CRD for the template and for the placement of the resource, and then you can create a FederatedTypeConfig that refers to the CRD you're targeting, the template CRD, and the placement CRD. So essentially all you're doing is creating CRDs, and then propagation can work. There's actually no need to use kubebuilder to generate any code — you can, I'm just saying it's not a requirement.
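As a rough sketch of the configuration C describes — the group, version, and field names here are illustrative assumptions, not the exact federation-v2 schema of the time:

```yaml
# Illustrative only: apiVersion and field names are assumptions.
# The idea: one CRD holds the template, one holds the placement, and a
# FederatedTypeConfig ties them to the target type so the sync controller
# can propagate instances to member clusters -- no generated code required.
apiVersion: core.federation.k8s.io/v1alpha1
kind: FederatedTypeConfig
metadata:
  name: bars.example.com
spec:
  target:
    kind: Bar                      # the CRD created in host + member clusters
  template:
    kind: FederatedBar             # CRD carrying the desired spec
  placement:
    kind: FederatedBarPlacement    # CRD listing destination clusters
```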
C: For the future, an area of work will be actually automating creation of the federation types: you specify a target type — it can be a CRD or a core type — and it will generate the template and placement for you, and create those CRDs in the cluster. Right now you have to do it manually because we haven't gotten that far.
C: I'm curious — I mean, I would second the request that we start with a design document. I think this is a requested feature, but it would be good to basically get back to use cases: why are we doing this, as a precursor to deciding on the implementation? At least for myself, I'm pretty curious as to what use cases you have that are driving you to want to implement this. This is a requested feature, but I do have a little bit of concern about it.
C: If there's a community that you've been communicating with and they've also expressed an interest in this feature, maybe capture that there as well. You know, to me that's kind of the motivation for a feature: who's going to use it, and what are they going to use it for — and then we can figure out the best way to do it, yeah.
C: It may be that this is a generic tool for someone who has multiple clusters and just wants to have a view into those clusters. And it may be that there are common elements in the federation control plane — like the cluster registration and the authorization might be shared — but maybe it doesn't have to be tied to, say, propagation, and could be something that someone can use in the absence of configuring a propagation mechanism, yeah.
C: You might have something that could be deployed in parallel, like in the same cluster, if people wanted both federation and read access to multiple clusters. But maybe a user would just want access to multiple clusters and doesn't want federation. So it would really depend on the use cases involved. So yeah.
F: I have a PR open if anybody would like to review it; I've been doing some work to get a minikube cluster working in the Travis environment. That's pretty much working for one cluster. Travis does have limitations where you can't have nested virtualization using their VM environment, and they don't have any plans to support that. So it requires using minikube without a VM driver — passing in --vm-driver=none — and that basically hosts a Kubernetes cluster on the host itself, using straight Docker containers. And so that's working with one cluster.
F: It uses an Ubuntu distribution release — it's called xenial — but it's not actually documented. There were some hints on the internet that suggested it was potentially supported, or that they're going to eventually support it later this year. It was more trial and error, and it looks like it's working: if you request the xenial distribution you'll actually get one. So as long as Travis continues to support that, this mechanism will work. In the meantime, I have explored some of the kubeadm Docker-in-Docker (dind) cluster features.
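F's Travis setup might look roughly like the following sketch — the keys shown are standard Travis config, but the exact values in the PR are assumptions:

```yaml
# Sketch only: the actual PR's values may differ.
dist: xenial        # undocumented at the time, but Travis honors the request
sudo: required      # --vm-driver=none needs root on the host
before_script:
  # Run the cluster directly on the host with Docker, since Travis VMs
  # do not support nested virtualization.
  - sudo minikube start --vm-driver=none
```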
F: There you can define some environment variables to provide a prefix for the container names that it launches, so it doesn't clash with the original cluster that you create, along with a subnet for that other cluster to use, so it doesn't clash with the original — just some things like that. So it's possible we can use that; I haven't explored too much there yet. I'm still looking at potentially taking on a different federation feature, but in the meantime I've just explored some of this. Okay.
A: Why I came to this: as mentioned, we spent a lot of time trying to implement multiple-cluster-based infrastructure. I think no one else in the community is interested in that kind of scenario, and only we would need to struggle to bring up such infrastructure. I think it is definitely hard to implement and then keep maintaining. So I think most of our test cases work within a single cluster, but yeah, definitely we might need to verify more than a single cluster at some point. I think it's not an immediate concern.
F: Yeah, ideally we need two clusters, and that was my original goal, but just to expedite getting some unmanaged e2e tests running against an actual cluster, this PR supports just one cluster for now. The other problem is that there doesn't seem to be a non-public-cloud implementation that is mature enough for us to use for multiple clusters, after spending time investigating things like minikube and the existing testing implementations.
F: There are some rough edges around there. They have a Docker-in-Docker solution with some rough edges that's still maturing, and I was advised to go look at kubeadm dind. I was actually having problems with kubeadm dind on my laptop; it seems to be resolved on an Ubuntu distribution, but I haven't played around with the parallel clusters.
C: I think the main advantage of having e2e is less about that and more about having a real control plane. We've had problems in the past with the real controllers interacting in unexpected ways with our controllers — a namespace controller, for example; it can cause problems. I'm not saying that there isn't a need for e2e, but I would caution: Kube has a history of just relying on e2e-testing everything, and people in the know have been pushing back on that.
C: I don't know, this is kind of a gray area, because effectively e2e tests are largely the same as integration tests, and we should probably set a timeline for deprecating integration tests in their current form. I'm not saying there isn't room for tests that use what we call the managed fixture, but I think we should only have one framework, and it should probably be Ginkgo — so that if you want to run an integration test like today, you write it as an integration test and use `go test`, and in the future...
C: ...about API interactions, so I don't think there should be any limitation there. I guess my question to you would be about the tests that you currently have in integration — and maybe this is a question for you — is there any test that you've created that couldn't be run in an unmanaged scenario, where we're talking about a deployed federation versus a test-managed one?
A: In my scenario — so there are actually tests which need a running cluster. For example, in the replica scheduling cases, there are scenarios which are not tested because — okay, so there are scenarios where a pod of a given deployment could be healthy or unhealthy in a particular cluster: a cluster does not have healthy pods, or keeps the pods in an unhealthy state for a long time. Those replicas are supposed to be moved to other clusters, where the pods would be scheduled properly.
C: Then the only thing we need to validate is: given these replica counts across multiple clusters, here's the expected replica count we are going to communicate. So it's not a matter of our pods moving around. I'm not saying we can't test that; what I'm saying is that that is testing Kubernetes, not testing federation. The only thing federation does is instruct Kubernetes what replica count we want for a given cluster, and I would think that could be tested without actually having multiple clusters.
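C's argument — that federation just computes a replica count per cluster — suggests the core logic is testable as a pure function. A minimal hypothetical sketch (the weighting and remainder rules here are illustrative assumptions, not the actual federation-v2 scheduler):

```python
# Hypothetical sketch of per-cluster replica distribution.
# NOT the actual federation-v2 scheduler; it only illustrates that the
# federation side is a pure computation -- desired total in, per-cluster
# counts out -- which can be unit-tested without any running cluster.

def distribute_replicas(total, weights):
    """Split `total` replicas across clusters in proportion to `weights`.

    weights: dict mapping cluster name -> non-negative integer weight.
    Leftover replicas go to the clusters with the largest fractional
    share (ties broken by name), so the result sums exactly to `total`.
    """
    weight_sum = sum(weights.values())
    if weight_sum <= 0:
        raise ValueError("at least one cluster needs a positive weight")
    # Floor of each cluster's proportional share.
    counts = {c: total * w // weight_sum for c, w in weights.items()}
    leftover = total - sum(counts.values())
    # Hand out the remainder by largest fractional share, then name.
    by_fraction = sorted(
        weights, key=lambda c: (-(total * weights[c] % weight_sum), c)
    )
    for c in by_fraction[:leftover]:
        counts[c] += 1
    return counts
```

A test for the unhealthy-pod scenario A describes would then reduce to calling this function with adjusted weights and asserting on the returned counts, with no cluster involved.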
C: Like, yes, you want to be able to make sure that you can set replica counts and they are reflected. But anyway, I'm being a little bit dogmatic here. I just want to minimize the amount of expensive testing we do, because it's really easy, especially in a distributed system like Kubernetes, to spend a lot of time running tests of very little value. I just want to make sure of that, yeah.
A: Yeah, I agree with what you're saying, and I'm also not insisting that we necessarily need to have it right now. I think what Ivan has set up itself has value, in terms of testing, as you mentioned, a real control plane against a real cluster, and that might be what's emphasized as of now. But with an expensive setup which has multiple clusters and all that, I don't really see any great value being brought in right now. Maybe later.
A: Before we started, I had one more point about a remaining-items update, yeah. So last meeting, or the meeting before, we actually listed down a couple of items, saying that these might be possible future items that we might need to think about or prioritize. I think we can spend some time on homework: everybody goes through that list and maybe prioritizes it properly. We did that exercise in one meeting; I think that was not completely effective.
A: Yes, so you remember we had this list, and we did assign some priorities — like who holds which priority, Red Hat probably is interested in this one, and that kind of stuff. I think that this is not concluded or complete. So what I am requesting is that we go through this list again after the meeting, and then, from Red Hat's perspective, the five highest-priority things that you want to focus on, or want the community to focus on, can be listed out — and likewise from our side's perspective.
C: ...pick up the alpha, kick the tires, comment — you know, provide feedback as to direction: where are we going in the right direction, where do we have gaps, where are we going in the wrong direction. I think committing to heavyweight development in the absence of that kind of feedback is a little bit problematic, and so that's what I would focus on from the Red Hat side.
C: I think we'd focus on things where we have some certainty — like fixing deficiencies in the existing capabilities, or streamlining things so that it's easier to work with and evolve over time. That sort of thing, versus doing heavyweight feature X that we don't actually have user feedback for — you know, it's very important to do that. Does that make sense? Yeah.
C: Yeah, I mean, my emphasis would be: as we move towards beta, the most important part about alpha is actually getting it in people's hands and getting substantive feedback as to what we're converging on as we move to beta. If we don't manage to do that — I'm not sure what the qualification for beta is, if we've gone through alpha and nobody's said anything about it. To me, that's a failure of the alpha, yeah.
C: A line item that we don't have here — these are all technical tasks — I would suggest would be: find ways to engage the wider community or interested parties. I mean, I think that there is interest in multi-cluster capabilities out there, and maybe interest has kind of fallen off a little bit during the transition to v2, and we need to try to find ways to find those people and engage them again.
C: So, not really things we can capture in issues, but yeah — it's certainly foremost in my mind these days: how we can try to get working code into people's hands and get feedback from them, more than developing new features. If they come back and say it's totally useless, it doesn't do this, this, and this — that would be great feedback. I would much rather have that than have them not saying anything at all.
A: On our front, I know that what we are doing, especially with the Chinese community — I can call them, I don't know, the marketing team or whatever — is trying to put out what we have built, which includes federation-v2, in places like meetups, and there are some conferences. But so far, I haven't seen proper blogs or that kind of thing put out over there.