From YouTube: Kubernetes SIG Federation 20170627
A: June 27 — we are recording this call, as we do every week. Today we have one presentation, by John, on federated DNS. Before we get started, I just wanted to give a general update as to what we've been doing the last two weeks. At Google, every so often we host what we call a customer advisory board: we have customers come in, and they tell us what problems they're trying to solve. [unclear] — they would be users of Kubernetes, in this case.

A: It's not unreasonable for users to think that, in terms of multi-cluster and Kubernetes, they're really looking to this SIG to solve their issues, and it seems like there's a correlation too: to them, Federation — the Federation API — is out there, ready for primetime, and they interpret it as "multi-cluster".

A: But I think the important thing to remember is that while we're working on trying to make Federation better, there are other people who have real problems that they need to solve in a multi-cluster setting today. We also have people in the field who help customers, and we realize those customers are developing their own tooling to do things that are done today in Federation, because users can't adopt Federation while it isn't fully ready.

A: There are a lot of items that were listed even last week, and that are listed in the Federation engineering worksheet. We brought a lot of these up, but I think the prevalent point here is: the SIG has been mostly focused on, I guess, the bucket of APIs — the Kubernetes-API-related ones — and on integrating those into Federation, and not necessarily on addressing more of the foundational points that really have to be solved before we can take the Federation API to production.
A: Some of these we've talked about — issues like annotations — but there are much bigger problems that users are facing today that I think should be discussed within this SIG, and these are not necessarily features per se; they can be reference designs — for example, in the area of authentication and authorization.

A: I don't want to get bogged down on too many of these other challenges; I think there are a few more I could mention. But I think the other point is: what do we want to do as a SIG? Say, again, that the Federation API is the way forward? I guess we want to know whether we can capture all the user intent of what people want to do with federated clusters, and see — maybe the answer is different.

A: One that comes to mind is cross-cluster service discovery, which in itself is not necessarily tied to the Federation API. It's a valuable piece of technology that today only comes hidden under the hood of Federation. I mean, John's work on federated DNS has a small dependency on a federated API controller, but it stands on its own as a solution; it seems like it could be adopted as a multi-cluster solution that sits in parallel with the federated API.

A: So as a SIG — and, as I say, one that has the name SIG Federation — is it correct for us, or is it desirable for us, to try to tackle what we think reflects multi-cluster use cases, things that customers are trying to solve today? They come to Google because we're a cloud provider and we have, you know, technology like load balancers.

A: But these are mostly use-case-oriented problems. I see them as being parallel to what the Federation API wants, but this is something that we, for now, will have to focus on, because for the users we see, these are the problems they're trying to solve. There are others that I can list, but I guess this is the big picture.

A: Those are the overall few points that we wanted to bring to the SIG: where users are, what the challenges are. Again, I hoped to go through all of them — we've already discussed some — and maybe open it up to the SIG to see what we want to solve together and how we want to partition this work.
B: All right. So it sounds like what you're saying — and I've only been a little bit involved in Federation, so I'm trying to get a handle on everything here — it sounds like what you're saying is that traditionally folks have worked on the Federation control plane, kubefed, and the associated lifecycle, and that you're thinking this SIG should expand to general multi-cluster types of use cases, as opposed to specifically focusing on one set of tooling.
A: I think there's a documentation component there, yes, but there's definitely also a tooling component, mainly because we've seen cases where our users solve multi-cluster problems — for example with a deployment — but they may not want automated control doing this in the background, because it's a very delicate operation. So for them, doing a deployment over multiple clusters, they want to retain more granular, foreground control over what happens. Yeah.
A: I'd suggest it's kind of mixed, right: maybe some of the operations can be done in a controller, but most of it seems to belong in a foreground tool. And I can think of three or four users that explicitly told us, you know, "we're never letting a controller do this — this is too vital an operation for us."
C: It's more like: first figure out what the best practices are, and then, if it makes sense, maybe distill tooling out of that. I think you're basically saying, you know, we have the cart in front of the horse here — we started with the tooling and then tried to derive the best practices — and we're going to step back, first work out what the best practices are and how we should go about this in the first place, and if that necessitates tooling, so be it.
D: I think it might be worth just reiterating: I had a discussion with Christian and Mike — I'm not sure if he's here today — a couple of days ago on this general topic, and we sort of took a step back, looking at how we got to where we got to. Some of this stuff requires digging into the details of the particular use cases.

D: I actually started with the document — that was my proposal — and then some people pointed out to me that this introduces a, you know, fairly substantial learning curve for people, who now have to learn a potentially different API with a different set of abstractions and objects, or, you know, a different version of that: the same named abstractions, but ones which actually work differently than the way they do in Kubernetes. When I say "work differently": they have the same name, but they have a different actual API — a different set of fields, different status, different spec, et cetera. And I think Clayton's here today; I think he was one of the main proponents who argued, successfully, that a first step might be to start with the core Kubernetes abstractions, because we thought about them fairly carefully in many cases, people are familiar with them, and companies have built tooling around them. Start there, ask how many of these abstractions make sense first, and then potentially implement them. Then the next step might be to write down where these things fall short — where they don't do what people actually want to do in the multi-cluster scenario — and then augment them, either with additional abstractions or by changing those abstractions in appropriate ways for a specialized multi-cluster API.

D: So I guess where I'm going with all of this is: I personally think — it wasn't my idea, it was actually mostly Clayton's idea, but I think it's a good one — that what might be appropriate at this point is to take the set of problems that we've come across through this advisory board and figure them out in detail, because some of these things do require some detail.

D: If we had a full Kubernetes API in Federation today — which we don't have, but we're fairly substantially there, modulo, you know, stuff being alpha and not stable and unfinished yet — if we had all that, how would people address these particular things they're trying to do? And in the cases where there is absolutely no solution for what they're trying to solve, what is the most appropriate alternative? Is it building an external tool outside of Federation? Is it adding a new API type to the Federation API?
D: We couldn't anticipate all of the actual use cases of these things, because there were many, and many people had different ideas of how they would use Federation. I think the original document that I wrote outlined, you know, at least five or six general areas where this stuff is potentially useful: the cloud provider case, hybrid cloud stuff, high availability stuff — there are many different areas.
D: So yeah, I guess, as Christian says, we've got to a point now where, you know, the implementation we have so far is not necessarily addressing the majority of the use cases — which is not an unexpected place to have come to; I think we anticipated getting here. The question is: do we finish the Kubernetes API in Federation, get everything that is in that API to the point of being GA, and then, you know, go and extend it from there, or do we extend it before that?

D: I think there are some challenges associated with that, but we can certainly try to solve specific problems that way. And yeah, the decision we came to provisionally a few quarters ago — this was at a different customer advisory board — was that people would like us to GA the features we have before extending them much further, and we decided to do that. Those were largely the things we had on the 1.7 and, holistically, the 1.6 plans, and I don't think many of them got to GA, or even beta for that matter.

D: Sure — I don't disagree. When I say beta or GA, I mean both the APIs and the product. If the APIs are not stable, then we can't call them GA — you're not going to support them into the future — and if the infrastructure does not actually work properly, because it's got bugs in it or doesn't scale up, it's also not GA. But yes, there are two sets of problems to solve — totally agreed — and we went to a lot of trouble, you know, stabilizing some of the implementations in the past six to nine months. We did stabilize some of the APIs, and things like the security and authentication issues got done, but we didn't get as far on some of those things as we would have liked to. And yeah, I think if we had executed all the things that we planned to execute six months ago, we would possibly be less concerned than we are. Maybe.
A: But don't you think there's somewhat of a mismatch, in that people want to consume nuggets of what is implemented in Federation, and the way that we're developing things doesn't give them that opportunity? You know, you can't have one foot in and one foot out — you either embrace the model or you don't.
D: I'm not quite sure what my opinion on that is, quite frankly. I mean, there's always the ability to build automation outside of the Federation control plane. You can always, you know, build a kind of kubectl-type tool that does a rolling update across multiple clusters, or that somehow programs your DNS to do a bunch of things across clusters.

D: So yeah, I guess we should perhaps go into the concrete cases where people want to take a piece of what Federation can do and use it without actually running a Federation control plane. That might be a useful exercise, and then we can figure out whether those things are, you know, sort of fuzzy, not particularly well-thought-out aspirations, versus being concretely useful. And maybe my wording there — I didn't mean to imply that I'm skeptical of it, it's just...
D: So maybe now's the right time to do it — we could dive into that in a bit more detail, right here or another time. But it seems to me that for cross-cluster service discovery, you need to have a long-running monitoring service that will figure out when clusters go up and down and, you know, reconfigure whatever needs to be reconfigured — either a load balancer or a DNS server — to address those things.
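[Editor's note: that monitoring loop can be sketched roughly as follows. This is a toy model only — the cluster names, the health probe, and the record format are illustrative, not taken from the actual federated service controller.]

```python
# Toy sketch of the long-running monitoring loop described above:
# probe each cluster, then rebuild the DNS answer set so that only
# healthy clusters are handed out. All names here are invented.

def rebuild_dns_records(service, clusters, probe):
    """Return the DNS A-record IPs for `service`, keeping only
    clusters whose health probe currently succeeds."""
    healthy = [c for c in clusters if probe(c["name"])]
    return {service: [c["service_ip"] for c in healthy]}

clusters = [
    {"name": "us-east", "service_ip": "10.0.1.10"},
    {"name": "eu-west", "service_ip": "10.0.2.10"},
]

# Pretend eu-west just failed its health check: it drops out of DNS.
records = rebuild_dns_records("myapp", clusters,
                              probe=lambda name: name != "eu-west")
```

A real controller would run this in a loop and push the result to a DNS server or load balancer; the point is only the shape of the reconciliation.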
D: That's kind of what the federated service controller does, and it does that on the basis of configuration that you feed it. I acknowledge it's not entirely clear to me how one might pull that usefulness out and deploy it in something that doesn't start looking very much like a federated service controller.
H: I don't think the service controller is the issue. I think it's more that the current implementation relies on exposing the kube APIs, you know, via the Federation control plane, and we run into the issues of versioning: how do we support different versions, how do we upgrade, and so on. The idea of just running a controller isn't really the issue; it's more how people access it. With the current approach there are just so many issues that we haven't addressed yet.
I: Yeah, I see these as strictly two different issues. One is the Federation API and whether it should be tied to the Kubernetes API, and then there's the Federation architecture. Like I said, there are some use cases where having a long-running controller use the Federation architecture does make sense, but on the API side we are not in a good position — it is really difficult. So I'd say there are two different problems. Yeah.
H: In my mind, right — as a prototyping exercise, I think using the kube API has been a huge win. It just sidesteps all of the supportability issues that really are critical for bringing something to customers: something you can support, provide as a product. We've discussed this in the past; I think this is just the time where we actually have to solve those problems, and that may mean that exposing the Kubernetes API is not really a viable way to do it.
B: I'm one of the maintainers — along with Miek Gieben, one of the primary maintainers — of CoreDNS, and I work for Infoblox; we do DNS stuff. Infoblox is the one who sponsored the integration between CoreDNS and Kubernetes for the in-cluster DNS — not the one with the Federation control plane, but for replacing the standard kube-dns with CoreDNS.
B: I'm also working internally at Infoblox, leading the effort where we're going from our existing SaaS control plane, which is Mesos and Marathon, and moving to Kubernetes. So that's how I'm involved here. We're not using Federation yet because, as we all know, it's not quite ready yet, but it is something we want to look at in the future. And as a maintainer of CoreDNS, one of our goals is to, you know, make it useful to everybody, and we thought that there were some opportunities within Federation.
B: Let's see, can I share my screen here... So we thought there were some opportunities here: there may be some use cases, some places, where the idea of publishing CNAMEs to external public DNS may not work well, as well as some other issues. Those are some of the things we wanted to try to address. And I think Quinton has made the point that this, you know, shouldn't replace the existing solution, and I agree.

B: There are cases where this doesn't work well, particularly at scale and in certain topologies where DNS runs everywhere, on every host in the cluster — I think OpenShift does that. But we do see, with smaller sets of clusters, that this may provide some additional opportunities for optimization.

B: So, a little picture here of what we're talking about. Fundamentally, in DNS, for those who aren't very familiar with it, there's the concept of a zone and a zone transfer. A zone is just a domain, and a zone transfer is a way for a secondary DNS server to get the resource records — that is, all the entries that are part of the zone — from the master, hold on to them, and serve them locally. Typically it's used in DNS for redundancy, or just for reducing latency by bringing something closer.
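[Editor's note: the zone / zone-transfer concept can be modeled in a few lines. This is a toy sketch with made-up names and IPs; real transfers use the AXFR/IXFR protocols.]

```python
# Toy model of a DNS zone transfer: a secondary server copies all
# resource records for a zone from the master (AXFR-style), then
# answers queries for that zone locally.

class DnsServer:
    def __init__(self):
        self.zones = {}  # zone name -> {record name -> value}

    def add_record(self, zone, name, value):
        self.zones.setdefault(zone, {})[name] = value

    def transfer_zone_from(self, master, zone):
        # Copy every resource record in the zone, like an AXFR.
        self.zones[zone] = dict(master.zones.get(zone, {}))

    def resolve(self, zone, name):
        return self.zones.get(zone, {}).get(name)

master = DnsServer()
master.add_record("example.com.", "www.example.com.", "192.0.2.1")

secondary = DnsServer()
secondary.transfer_zone_from(master, "example.com.")
# The secondary now serves the record locally — redundancy and
# lower latency — without asking the master on every query.
```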
B: For a federated service, the local cluster DNS would do what it does today: it would look for a local instance, and if it doesn't find one, rather than just recursing out or proxying out to a public DNS, it would instead look at the list of clusters which have that service running — in a federated mode, across all the clusters — apply some policy, and then choose to return one of the IP addresses from one of those clusters.
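[Editor's note: a minimal sketch of that lookup order, assuming hypothetical record names and a pluggable policy function — this is not the actual CoreDNS plugin API.]

```python
def federated_lookup(name, local_records, cluster_endpoints, policy):
    """Resolve `name`: prefer the local cluster's record; otherwise
    consult the clusters that run the service and let a policy pick
    one, instead of recursing out to public DNS."""
    if name in local_records:
        return local_records[name]          # local hit, as today
    candidates = cluster_endpoints.get(name, {})
    if not candidates:
        return None                         # fall through to normal recursion
    chosen = policy(sorted(candidates))     # e.g. nearest or least-loaded cluster
    return candidates[chosen]

# Hypothetical federated service known to two remote clusters.
endpoints = {"db.myns.myfed": {"us-east": "10.0.1.20",
                               "eu-west": "10.0.2.20"}}
ip = federated_lookup("db.myns.myfed", {}, endpoints,
                      policy=lambda names: names[0])
```

The interesting part is the policy hook: that is where "nearness" or compliance rules would plug in.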
B: However, to make the user experience really simple and easy and slick, that's where we would start looking at modifications to kubefed, where we essentially allow kubefed to set this up. In order to make this work, you need to expose DNS, which most people don't want to do. So to mitigate that, we expose DNS only over a gRPC connection: one of the things CoreDNS does that's unusual is that it offers a way to get DNS records via gRPC over TLS.

B: So the idea would be that when we expose that DNS service rather publicly, we do it only via the gRPC-over-TLS connection, potentially with some sort of client certificate, or, potentially to avoid DoS-type issues, you could put network policies on the load balancers deployed in front of them. And of course something needs to expose those DNS-over-gRPC endpoints and coordinate the whole process; that's where we would see something being done in the Federation control plane, or in kubefed essentially. That's really the primary thing. The other concern we talked about in the document is, of course, that this creates a mesh, and any mesh has an N-squared scalability issue.
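[Editor's note: the N-squared point is just arithmetic — if every cluster's DNS pulls records from every other cluster, the number of peerings grows quadratically with the cluster count.]

```python
def mesh_connections(n):
    """Directed peerings in a full mesh of n cluster DNS servers,
    where each server pulls zone data from every other one."""
    return n * (n - 1)

small = mesh_connections(5)    # 20 peerings: manageable
large = mesh_connections(50)   # 2450 peerings: the scalability problem
```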
B: So that's where larger sets of clusters can become an issue, and at that point you either need to rendezvous somewhere, or you end up looking very similar to what we have today in the Federation controller, maybe with some slight differences. So, you know, it's a question of whether we want to pursue this or not, but that's really it, in short. I don't know if people have questions.
I: I would say what I like in this proposal is adding the nearness concept to CoreDNS — CoreDNS gets to decide which cluster it should hand out, rather than, like today, the service controller just configuring all the DNS providers. But also, in the existing design, in-cluster DNS and cross-cluster DNS are separate, so if one crashes it doesn't impact the other — which we lose with this proposal.
B: Yes, you do end up tying them together. And, you know, there's the thing Quinton brought up about potential denial of service; we would have to see how we can mitigate that — there are a number of ways to do it. But there are also scenarios that are purely on-prem, which I notice is where CoreDNS is generally used today, when there's not a cloud DNS provider, and in those cases it wouldn't really be such an issue either. So it's really an alternative.
D: I wanted to add one comment, which I didn't actually get around to adding to the document, which is that the nearness algorithm stuff could also be implemented in the existing Federation external DNS solution. The current implementation was really just a first pass at getting something semi-sensible: it assumes that, you know, clusters in the same region are closer than clusters in different regions, etc.
D: But there's no fundamental reason why that algorithm could not be made more sophisticated. In fact, I think in the design proposal I wrote that it might be a good idea to have the individual clusters measure the latency between clusters and use that to inform the DNS management portion of the federated service controller, which would then know what the proximity between clusters actually was — in terms of latency, bandwidth, whatever.
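[Editor's note: a sketch of that idea. The measured latencies and cluster names are invented; the real controller's data model may differ.]

```python
def nearest_cluster(latency_ms, exclude=()):
    """Pick the serving cluster with the lowest measured latency,
    instead of assuming same-region means closest."""
    candidates = {c: l for c, l in latency_ms.items() if c not in exclude}
    return min(candidates, key=candidates.get)

# Hypothetical measurements taken from the querying cluster.
measured = {"us-east1": 12.0, "us-west1": 68.0, "europe-west1": 140.0}
```

With real measurements feeding this, the DNS management portion could order answers by actual proximity, and skip clusters whose service is currently down via `exclude`.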
B: I think you actually brought that up in a comment. So there are existing solutions: there are existing GSLB — global server load balancing — solutions that are based on DNS and do that natively, and actually at Infoblox we do it in our product. Basically, you know, that's another programmable piece you need to deal with, which isn't necessarily a big deal. But the idea of pulling it into CoreDNS itself is that we can go out to these external policy engines, and those policy engines can do more kinds of policy than we necessarily could in the traditional GSLB type of thing. So, I know Torin in the past gave a demonstration of cluster placement using the Open Policy Agent, where you decide, say, that you need to place these things in the EU cluster.

B: You can do the same sort of thing with regulatory-playbook policies, where your DNS redirection goes through that same set of policies, and then it will only select clusters that are compliant with whatever you're doing. I think that would be quite difficult to do with the existing cloud DNS providers — maybe you could do it by setting up some complex set of records, but I would think you would need more than that.
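[Editor's note: the shape of such a policy gate might look like this — a toy stand-in for a real policy engine like OPA, with an invented data-residency rule.]

```python
def policy_select(clusters, request, rules):
    """Return only the clusters every policy rule allows for this
    request; DNS then hands out IPs from those clusters only."""
    return [c for c in clusters if all(rule(c, request) for rule in rules)]

# Hypothetical regulatory rule: EU users may only be sent to EU clusters.
def eu_data_residency(cluster, request):
    if request.get("user_region") == "eu":
        return cluster["region"].startswith("eu")
    return True

clusters = [
    {"name": "eu-west", "region": "eu-west1"},
    {"name": "us-east", "region": "us-east1"},
]
allowed = policy_select(clusters, {"user_region": "eu"}, [eu_data_residency])
```

In the proposal, the policy engine is external; this just shows why DNS answers filtered per request are hard to express as a static set of records in a cloud DNS provider.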
D: I think — I mean, we don't need to go into detail — it could equally easily be done in any other solution. And, to be clear, I'm not actually hanging on to the existing solution just because it's there or because I wrote it. But I think the fairly obvious downside of the solution proposed in this document is the kind of catastrophic failure scenario where you can quite easily and conceivably take down the internal DNS of all of the clusters. It's a matter of weighing that up against the benefits it provides, and how many of those benefits could not be provided in a safer way.

D: To be clear, I don't have any objection to anybody working on the proposal as it stands at the moment and having it as an optional, you know, deployment possibility. But I do think that the downsides are very substantial — I think I added those to the comments — and the upsides are beneficial, but not enough to justify the downsides, particularly if we were to add those benefits — for example, improving the nearness algorithm — to the existing solution.
B: Yes — and unless we decide to make it an option, that's sort of simple, although I did see there was one PR being debated about whether a DNS provider is mandatory or not. So probably, if you were running this, you wouldn't want to run a DNS provider, but other than that very minor thing you wouldn't necessarily need any changes — although to make it really convenient, it would be nice to have some, per the earlier discussion.
A: I'd just like to close out. There were three or four different concerns, based on today's discussion, about where the SIG wants to go from here. In terms of the API, we've got a little bit of input from Maru this time. [unclear] Certainly there are differences.

A: It's more of a long-term kind of thing. So I guess what I propose — or maybe it's just an ask — is: despite the name SIG Federation, do we want to take up multi-cluster discussions if they come in the form of tooling, or any other form that is not strictly the Federation API? And if so, are we going to make room for them in the agenda?
H: I'm definitely supportive of expanding the scope to be multi-cluster. I think the only concern is that it's not enough to just say we're going to talk about this and, you know, have room for it. I think it's more that we've been very focused on kind of federating applications — that's what led us to the existing solution — and that in itself is a huge mission, so if we're going to expand the scope, we have to find a way to...
C: Just to follow on to that, Maru: it sounds kind of like what's being proposed right now is that Federation is a very prescriptive approach to solving a multi-cluster issue, so we should be less prescriptive and look at this as multi-cluster work rather than Federation, but at the same time reduce the types of use cases we're going after. Is that sort of what's being proposed?
H: And I guess it maybe just ties into prioritization. I mean, any successful software effort always has an endless amount of backlog, and what you actually accomplish is based on which things bubble to the top and actually get the attention. So I think, as the SIG tries to approach other use cases that it hasn't touched in the past, if people are interested in working on those, that's probably likely to take precedence over anything else. It's just that this does represent a fairly substantial shift, and what are we going to do with the existing stuff? Are we going to focus on other things to the exclusion of that? Are we ever going to get to GA on some of this stuff? We have a lot to discuss — this is kind of a general strategy question.
G: With this in mind, we kind of all need to assess a lot of these cases that are a concern. I think it's good to think about these things individually, separately, too, rather than just as a group, and maybe when we come back next week we'll have more fully formed ideas, having thought more about this — sort of let this sit and look at these cases, and at what we've learned about Federation, in this context. I think it will then become more obvious how this should be divided up, and what sort of structural, mandate, and organizational changes are necessary to help us get that done. But I know this still feels kind of too new to really make big decisions like that.
H: I mean, it's not a matter of making big decisions so much as having a conversation and getting to consensus on what we're trying to accomplish; then we can focus on how we, you know, do that. But I really don't think that's something we can accomplish with an hour a week. I agree that there has to be, you know, individual contemplation of these issues, but there's no way we're going to evolve from what we have today to the future we're proposing based on just random conversation — there has to be a really focused effort. In the ideal case we would actually have people in a room, you know, for a day or two or more, hashing this stuff out. That could be logistically difficult to arrange, I understand that, but in the absence of that, maybe we need other means of having a focused discussion.
H: I mean, I think some of those issues that I sort of put on the agenda kind of presumed business as usual, and I'm not sure that's necessarily the case. So, yeah, I agree on the staffing issues. If we're going to scale this down and have a more focused effort that can actually deliver supportable features or capabilities, then there needs to be a determination of what the most important things are. What are the most important things? How do we break them up into pieces that are actually going to be deliverable in 1.8 or 1.9 or wherever, and how are we going to resource that? And also, the list of priorities today doesn't really include any of the work to break things out so that customers can consume them without buying into the control plane as it is today. That's a whole other category of work we haven't even touched on.

H: So, like I said, I think we need to have the hard conversations: what can we reasonably achieve? What do we want to achieve? What part of what we want to achieve can reasonably be expected to get done in 1.8, and how do we do that? I think we were talking internally about this at Red Hat, and Steve Watt mentioned that SIG Storage was having issues last year around focus and delivering in a timely fashion, and I think they just had an intensive on-site to try to nail things down. Maybe you can provide more detail about that. Yeah.
C: Yeah, sure. So basically Saad Ali, [unclear], myself, and a few other folks just got in a room, and the first thing we did was talk about use cases, and then, you know, the high level: what are we trying to do? Okay, we're trying to do multi-cluster on a single cloud provider — great, what does that actually mean? What are the limitations? What works, what doesn't? Lay it out, and from there figure out what our strategic approach is.
H: So, we're kind of at time for this meeting, and I'm not really expecting us to come to some sort of conclusion. But I do think we should probably follow up this week, because, in my mind, any energy or enthusiasm for continuing to work on the existing solution is pretty greatly diminished if I'm uncertain as to whether it's going to be viable, or whether the effort put in is going to be useful to anybody.

H: But I think we're in this position where, okay, we're going to change direction, and now, you know, we're kind of stalled, and we have to actually map out the new direction so we can start moving that way. So the sooner we do it the better; the longer we wait, the worse it is for everybody involved, I think.

H: There's not really a precedent — I haven't been involved in Kube that long; maybe SIG Storage is the only example we have. But can we maybe, on the mailing list, figure out another time to meet, and just map out a strategy for discussing this? It doesn't have to solve the world; it just has to be: what concrete steps can we take — whether it be, you know, a series of video conferences or something — just to start walking in this new direction, with consensus from the stakeholders involved.
A: Yeah. I think the other thing is that maybe having people from other SIGs join would be helpful, because every time you try to make progress, especially on things that have to do with identity and policy — these are hard problems. And never mind the backlog; I mean the foundational issues. These are things that, whether you're using the Federation API or not, you have to tackle if you want a tenable multi-cluster strategy, and right now they don't show up pretty much anywhere on the spreadsheet. So I'd like those to be part of the discussion. Yeah.
H: Well, so, an action item: Christian, can you please work on scheduling another meeting — get consensus from the SIG as to when it should be — and then try to nail down what we actually want to accomplish? Because there's lots of stuff to talk about, and, you know, there's a certain amount of emotion around it. People have been participating, but we really need to focus and just make decisions on what next steps to take.
C: I'm not clear, in the meantime, whether we should be working on the stuff that's currently in flight. I mean, just as an example, with SIG Storage, right, we were locked in a room for four days, not getting out till 6:00 p.m., to sort this stuff out. So, you know, one one-hour meeting is not really going to help much. So yeah.