From YouTube: Kubernetes Federation WG sync 20180613
B
I'm about 50% complete on that. I think I'll have a PR to post later today. In general, I'd like to get to a point where you can just deploy a Federation without having the dependencies that you need to develop it, so I just wanted to let folks know that I'm sitting down and working in that area right now.
A
Maru, would there be anything specific that might be useful for folks like me and Jesse, who might want to put their APIs and controllers on top of what you are building?
D
As we discussed last week, my goal is getting something in place: propose a PR, probably work on getting pending work merged, or, if for some reason it can't be merged, coordinating a rebase on top of it. But the majority of the work is around shifting to CRDs. On the testing side, controller code doesn't really change a lot, except for the imports of the clients. The paths have changed between generating clients for apiserver-builder and kubebuilder.
D
Also, in the process of doing this, I've been cleaning up the naming as per conversations we had in the past, so that all the primitive stuff will be in core.federation.k8s.io and the scheduling will be in scheduling.federation.k8s.io, so the paths will change as well. Aside from that, these are kind of trivial naming things, so I wouldn't expect this to have a dramatic impact on any work that you've done, and it should be easy enough to migrate. I'm going to do as much of the work as possible.
A
Yeah, so what I was saying is: there are two outstanding PRs right now, and once both of them are merged, we will be at parity with whatever functionality was there in v1 with respect to deployments and ReplicaSets. And I have already started trying to put up the same scheduler for HPA, basically getting the HPA API replicated using that same mechanism. I'll put that PR up and publicize it tomorrow, and after that I would actually be working on getting parity with respect to the replica scheduling that exists in v1.
D
I mean, I'm all for just making progress pre-alpha and getting stuff in there, and I'm hoping that post-alpha we can discuss things like application boundaries and being able to do scheduling at the application level, because we'd like the alpha to stick to deployments, ReplicaSets, and HPA initially.
B
Another one that you and I have talked about a little bit, Maru, is that now that we're putting some foundational building blocks in, it's probably time to start thinking about what the user experience is like: what degree of glue and experience can we provide over the building blocks that we've built so far, so that things are nice for users?
D
And that's kind of a nice segue. I don't see Shashi on the call; he's been doing good work on the multi-cluster DNS side. My comment on his PR, which I think is pretty much ready to go (it's LGTM as far as I'm concerned, so anybody else who wants to take a look at it is welcome to), is that when I looked at it, I think the user experience could be improved. I'm certainly not suggesting blocking merge; I think it's fine on its own.
D
It can kind of be a standalone feature even without Federation, but I think there would ideally be a little bit more integration between a federated service and the DNS record. Optionally, but potentially, there could be a field in the federated service with a flag saying, you know, "federate DNS" or something, and then a controller that would read that and generate the service DNS records that Shashi's implemented a controller for. And then the other part was that there's currently no... actually, Paul, why don't you talk about the cluster name part of your proposal?
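The integration just described (an opt-in flag on a federated service that a controller reads to generate the DNS record resource) could be sketched roughly as follows. The `federateDNS` field, the API group, and the record's shape are all assumptions for illustration, not the actual API.

```python
# Sketch only: derive a multi-cluster DNS record from a federated service
# that opts in via a (hypothetical) federateDNS flag in its spec.

def derive_dns_record(federated_service):
    """Return a MultiClusterServiceDNSRecord-style object for a service that
    opts in to federated DNS, or None when the flag is absent or false."""
    spec = federated_service.get("spec", {})
    if not spec.get("federateDNS", False):  # hypothetical opt-in flag
        return None
    meta = federated_service["metadata"]
    return {
        "apiVersion": "multiclusterdns.federation.k8s.io/v1alpha1",  # illustrative
        "kind": "MultiClusterServiceDNSRecord",
        "metadata": {"name": meta["name"], "namespace": meta["namespace"]},
        "spec": {"dnsSuffix": spec.get("dnsSuffix", "example.com")},
    }

svc = {
    "kind": "FederatedService",
    "metadata": {"name": "nginx", "namespace": "test"},
    "spec": {"federateDNS": True},
}
record = derive_dns_record(svc)
print(record["kind"])
```

A controller watching federated services would reconcile this derived object, which is the glue that the DNS controller in Shashi's PR could then consume.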
B
I think right now, as implemented in the PR (and this is a totally sensible first iteration), it will target the federated service in all the federated clusters, and we may need to loosen those semantics just a little bit to make it independently usable without Federation, or at least more explicit in the API. It's something that we can discuss as a group. I think that Shashi's PR is a great first iteration, and additive. I was going to ask Shashi...
B
What I understand he's put together and gotten to work is the multi-cluster DNS record resource in our project. Then he's got an adapter called federation-dns, which is an adapter between the multi-cluster DNS resource and the external-dns source thing that he added to external-dns. Sorry, I think it's called a connector in external-dns. So basically, federation-dns is an adapter between the multi-cluster DNS record resource and external-dns. I think it would be awesome to see all that stuff working.
B
I don't need that to happen before it gets merged. If I remember, there are probably one or two, or a handful, of minor things that don't block merge that we can also do. But it would be awesome to get a demo next week, if Shashi's up for it. I'll take the action item to talk to him about that and see if he's comfortable with it.
A
Yeah, I'll also update him, and I think it should be fine; he should be able to give a demo to the working group next week. And while you were mentioning the need to target this particular DNS record to a specific list of clusters, rather than all the clusters which are federated: doesn't that need get satisfied by the placement? Placement for services basically specifies the list of clusters. Or is that different from the placement that is specified on the targeted service?
A
So when you started, you mentioned that Maru had said some improvements in the usability of this particular mechanism might be helpful, and then you mentioned the idea that this DNS record could be specific to only particular clusters among the list of federated clusters. The thought that came to my mind was that this list is sort of already available as part of the service placement. Is what you mentioned something different from that?
B
And to put a fine point on it, one thing that Maru and I had tossed around is, hypothetically, if you could, for example, say on a federated service, "I want federated DNS for this," a controller could automatically create the multi-cluster DNS record resource for you and maintain it. If we had that list as explicit fields, it would maintain the list of clusters for the multi-cluster DNS. But just for full disclosure and clarity, I don't think...
D
To clarify, I mean the reason for maybe putting a list of clusters on the service DNS record, instead of just using placement, is that then DNS can be sort of a self-contained thing that doesn't require placement, or the concept of placement, if somebody wanted to use that feature entirely independently.
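The trade-off just described amounts to a simple lookup rule: prefer an explicit cluster list on the DNS record itself, and fall back to the service placement only when the record doesn't carry one. The field names in this sketch are illustrative, not the actual API.

```python
# Sketch only: resolve which clusters a multi-cluster DNS record targets,
# keeping the record self-contained when it lists its own clusters.

def target_clusters(dns_record, placement=None):
    """Return the clusters the DNS record should cover. An explicit list on
    the record wins; placement is only a fallback, so DNS can be used
    entirely without the placement concept."""
    explicit = dns_record.get("spec", {}).get("clusters")
    if explicit is not None:
        return explicit  # self-contained: no placement needed
    if placement is not None:
        return placement["spec"]["clusterNames"]
    return []

record = {"spec": {"clusters": ["us-east", "eu-west"]}}
placement = {"spec": {"clusterNames": ["us-east", "eu-west", "ap-south"]}}
print(target_clusters(record, placement))
print(target_clusters({"spec": {}}, placement))
```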
F
Can I just clarify something quickly? One of our high-level goals, certainly at Huawei, was to reach feature parity with v1, and particularly with federated services. In v1, the way they work is: you basically create a service, and you say "put it in these clusters," and it does that, as well as verifying which of those actually came alive, in the sense that the service was reachable and had pods behind it. And then it...
F
It actively manages the DNS records, multiple of them, to make sure that both external and internal DNS lookups go to the right service in the right actual cluster. I understood that most of the work that Shashi did was actually a building block towards that end goal. So is that end goal going to be achieved by the end of June, or is that end goal itself in question?
B
I don't think the end goal is in question at all. What you just described, Quinton, should, I think, be possible to do. When we've been talking through making the user experience better, what we're talking about is getting to that user experience you've just described. That's what I personally had in mind.
B
I think that is actually doable, personally, because I don't think there's a whole lot of glue to add between what's in Shashi's PR and the experience that you described, with the caveat that what we've done so far is external DNS. That was an important choice to make, because there's this external-dns project that is, you know, involved, but we still need to talk through the DNS for inside of a cluster.
D
I'm not entirely sure that's the case, and Shashi would really need to clarify it. But when I was reviewing that PR, there was code targeting cluster DNS, so I'm fairly sure he's targeting parity with that. I'm just not sure; I'm assuming it's implemented in his personal repo, but we'd really just need to get clarification.
A
I can update a little bit about that. There are three, maybe four, pieces of this whole functionality. Quinton, to answer your question: feature parity with v1, after Shashi's PR merges, would in all probability be there, but there are different pieces that need to be put in place together. So, what Shashi has: there are three pieces, well, four pieces. One is the service, or federated service.
A
That, you can say, is created using the Federation API. After which, the user has to create the multi-cluster DNS resource via its API. Apart from that, there are two more pieces that are needed. One is the DNS provider library that you were referring to, pulled out of Federation v1 rather than used from Federation directly, because if we vendored that from Federation, then we'd vendor a lot of code, and Federation v1 in turn depends on k8s, so there is a hell of a lot of vendoring nightmare over there.
A
So what Shashi did was pull out only that package, or those packages, which belong to this DNS provider library. There are three providers in that: AWS, Azure, and Google Cloud. That is what exists in Shashi's repo right now, and I think there is an active controller also that he runs to be able to query and use these libraries. And the fourth piece is the external-dns provider, which actually creates... no, sorry, in this case...
A
The fourth piece is not needed there; this library itself is sufficient to create the DNS records against a given DNS server. So this much gives parity with v1. If a user creates a service, creates a multi-cluster DNS service record resource in Federation, and this library is included as part of Shashi's controller, then the DNS records would be created as and when the service shards come and go in the federated clusters. If a cluster is not healthy, then its shard will be removed from the DNS server.
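A minimal sketch of that shard lifecycle, assuming a simple shape for shards and cluster health (the real controller's data model will differ):

```python
# Sketch only: DNS records track just the service shards in healthy
# clusters; an unhealthy cluster's shard drops out of the published set.

def dns_targets(shards, cluster_health):
    """Return the load-balancer IPs that should be published in DNS,
    dropping shards whose cluster is currently unhealthy."""
    return [
        shard["lbIP"]
        for shard in shards
        if cluster_health.get(shard["cluster"], False)
    ]

shards = [
    {"cluster": "us-east", "lbIP": "203.0.113.10"},
    {"cluster": "eu-west", "lbIP": "203.0.113.20"},
]
# eu-west goes unhealthy, so its shard is removed from the record set.
print(dns_targets(shards, {"us-east": True, "eu-west": False}))
```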
B
Can I repeat to you in my own words what I think you said, and see if I have the right understanding? I think what you said is that the federation-dns project, which is also the adapter between multi-cluster DNS and the external-dns project, also has an active controller that programs cluster DNS for the internal stuff that v1 did. Is that generally accurate? (Yep.) Okay, cool.
A
So I remember one or two comments; you guys, Maru and Shashi, had some points about utilizing some functionality from k8s in-cluster, the kube-dns that we are talking about. There is some Federation-specific functionality over there to do with the naming scheme, so that functionality is also required to have correct resolution for the names.
F
Yeah, I just wanted to very briefly clarify, because I think we're talking past each other to some extent. There is a DNS server in each Kubernetes cluster; it receives DNS queries from the containers and it replies. It has some custom code in it which says: if the name that is being looked up is a federated service name, do something slightly different. That is completely independent of external-dns, completely independent of the DNS provider library, and completely independent of everything else we're speaking about.
A
So that's how the cross-cluster federated service discovery works. Local service discovery works on a naming scheme, and this federated naming scheme is sort of an extension to that local naming scheme where, rather than five dots, there are some more dots, and that kind of stuff. I can't clearly explain it right now, because I don't have the whole naming scheme at the top of my head, but if needed, we can have a longer description of it at the next meeting.
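For concreteness, the "more dots" extension worked roughly like this in Federation v1; the v2 scheme was still being settled at the time of this meeting, so treat the exact formats as illustrative.

```python
# Sketch of the v1-style naming: the federated names extend the in-cluster
# scheme with a federation name, domain, and optional zone/region labels.

def local_dns_name(service, namespace, cluster_domain="cluster.local"):
    # In-cluster name resolved by kube-dns.
    return f"{service}.{namespace}.svc.{cluster_domain}"

def federated_dns_names(service, namespace, federation, domain,
                        zone=None, region=None):
    # Federation-wide name, plus an optional zone/region-scoped variant.
    names = [f"{service}.{namespace}.{federation}.svc.{domain}"]
    if zone and region:
        names.append(
            f"{service}.{namespace}.{federation}.svc.{zone}.{region}.{domain}")
    return names

print(local_dns_name("nginx", "test"))
print(federated_dns_names("nginx", "test", "myfed", "example.com",
                          zone="us-central1-a", region="us-central1"))
```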
D
So that's the state of the art today, I would expect. But maybe, in addition, there's the potential for this sort of cross-cluster service discovery to rely on something Istio-based or service-mesh-based. I'm not saying that we don't need this today, but there certainly was a lot of interest at KubeCon in integrating Istio for this kind of thing.
F
Absolutely, and conversely, there's been a lot of pushback against the current federated services, because they require that you expose an external service on your cluster, and all of the balancing, if you go between clusters, by definition goes out of the one cluster and back into the other cluster via an external IP address, and not everybody wants that. Some people want essentially internal cross-cluster service discovery.
F
Details TBD, but basically the two big objections are, one: relying on reprogramming DNS to change traffic routes, because there's essentially a potential 180-second latency, plus a whole bunch of funny DNS caches, etc., that make it not fast enough to respond to failures. And secondly, you have to expose these things externally to the big wide world, and not everybody wants to do that.
B
It seems like getting parity with the v1 behavior is a really sensible first step. It does sound like, maybe as a second step after alpha, we should talk about how we would realize these use cases. And if I understand correctly, by the nature of the use case, this is only a use case that goes from inside one cluster to another Kubernetes cluster, right?
F
The other kind of closely related thing is that we had federated ingress in v1, which did proper L7 load balancing for GCP, but I think there's general interest in cross-cluster ingress across multiple clouds. Step one being: it should work if all your clusters are in a cloud other than Google; you would want that to work. And then the next step beyond that is: if your clusters are in different clouds, you still want an ingress that works across them. Obviously, it gets progressively more difficult.
F
As you go up that stack. Right now, modulo some bugs in the ingress controller, it actually kind of works fine as long as all your clusters are Google clusters. But because the ingress exposes an IP address, basically, and not a DNS name, there's no way to redirect things, so you have to have a global IP address, which most of the clouds don't have. So the next step would be to make it work better with other clouds that don't have global IP addresses.
B
I tend to agree with you. This reminds me: there is a Heptio project that has elements of what we've just talked through, load balancing between OpenStack and Kubernetes clusters, that is, in the ingress functional area. I will try to dig up a link to that right now, but it sounds at least functionally relevant.
F
Very cool. I had one other brief question for a discussion topic, if there's nothing else on the agenda, so I'll go for it. One of the things we left behind when we went from v1 to v2 is Kubernetes API compatibility. I think that was the right thing to do, and we did it for very reasonable reasons, but one of the compromises it introduces is, as we were speaking about earlier...
F
I think you mentioned it, Paul: we want to deploy the entire application; we don't want to have to mess around with all the little bits individually. Things like Helm obviously would be able to do stuff like that, but Helm relies on a Kubernetes API, and so we can't use Helm.
F
Helm would have kind of worked before, and now it doesn't, and I think it would be good to get back to where we were, by hook or by crook, where we can at least say with a straight face: with minor changes to your tool, you can make it deploy stuff into the Federation.
B
I've thought a lot about this. I think that for Federation ultimately to be usable for people, we have to give them some way to take your Helm chart and target Federation. There are a couple of different ways we could think about doing that, and it might make sense to do multiple things, depending on the use case.
B
One option is to have Federation transform the resources for you on the fly into the federated variants, and maybe also get placement or overrides generated for you, something like that. But, Quinton, I'm just really riffing off the fundamental idea that we have to make it possible to target Federation with as little work as possible. I'm 100% on board with that.
F
And maybe once we've got alpha out the door, we should block off one of these sessions as a kind of brainstorm, just to figure out what the options are, and the pros and cons, and get that process done. Then we can crank out maybe a design doc or something and tackle it as part of the next phase. It seems like one of the bigger features that we will want to have soon after alpha.
F
I mean, there's one fairly simple, universal solution that may be relatively little work for lots of value, which is essentially to take the v1 API, promote all of the things that were annotations into top-level fields, and create a CRD that looks like that. All of those top-level fields could even be optional, so a tool could just work against that new API, modulo, you know, the CRD kind or whatever it is that will change; all the fields would be identical except for these additional optional ones.
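The promotion idea can be sketched mechanically. The annotation keys and the promoted field names below are hypothetical, chosen only to show the shape of the transformation:

```python
# Sketch only: lift v1-style federation annotations into optional top-level
# spec fields of a CRD-shaped object, leaving everything else identical.

PROMOTED = {
    # hypothetical annotation key          -> hypothetical top-level field
    "federation.kubernetes.io/placement": "placement",
    "federation.kubernetes.io/overrides": "overrides",
}

def promote_annotations(obj):
    """Return a copy of obj with known federation annotations promoted to
    optional top-level spec fields; all other fields pass through unchanged."""
    out = dict(obj)
    annotations = dict(out.get("metadata", {}).get("annotations", {}))
    spec = dict(out.get("spec", {}))
    for ann_key, field in PROMOTED.items():
        if ann_key in annotations:
            spec[field] = annotations.pop(ann_key)  # optional extra field
    out["metadata"] = {**out.get("metadata", {}), "annotations": annotations}
    out["spec"] = spec
    return out

svc = {
    "kind": "Service",
    "metadata": {"name": "nginx",
                 "annotations": {"federation.kubernetes.io/placement": "c1,c2"}},
    "spec": {"ports": [{"port": 80}]},
}
print(promote_annotations(svc)["spec"]["placement"])
```

A tool that already emits the v1-shaped object would keep working against the new API, since the promoted fields are additive and optional.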
F
And then, if those tools make small modifications to add those additional fields, everything just works. Under the hood, we then translate that thing into all of the various Federation v2 bits that need to be generated to make that thing happen. I'm not saying that's the only way or the best way, but it seems like a pretty super-duper easy and general-purpose solution, without having to build something specific to Helm or any more specific tool.