From YouTube: Kubernetes Federation WG sync 20180425
A: Okay, so I have been working on that high-level controller; I think we talked about this two weeks ago. What I am doing, or my intention was, is to write a high-level controller for one type, which sort of ports the same or similar code that exists in v1, so that it has parity over there and the same functionality can be done using the new API, which I call the replica scheduling API. What it does is: if a user creates this replica scheduling resource, and a same-named template, a deployment template, exists, then it monitors the deployments in the underlying clusters and updates the template and overrides to carry out whatever scheduling it intends to do.
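As a rough sketch of the distribution step described above, the Go snippet below assumes a hypothetical scheduling resource carrying a total replica count and per-cluster weights; the actual resource and field names in federation-v2 may differ.

```go
package main

import "fmt"

// replicaSchedulingSpec is an illustrative stand-in for the scheduling resource.
type replicaSchedulingSpec struct {
	TotalReplicas int32
	// Weights maps cluster name to a relative scheduling weight.
	Weights map[string]int64
}

// distribute splits TotalReplicas across clusters proportionally to their
// weights; any remainder is handed out one replica at a time.
func distribute(spec replicaSchedulingSpec, clusters []string) map[string]int32 {
	var totalWeight int64
	for _, c := range clusters {
		totalWeight += spec.Weights[c]
	}
	result := make(map[string]int32, len(clusters))
	if totalWeight == 0 {
		return result
	}
	var assigned int32
	for _, c := range clusters {
		n := int32(int64(spec.TotalReplicas) * spec.Weights[c] / totalWeight)
		result[c] = n
		assigned += n
	}
	for i := 0; assigned < spec.TotalReplicas; i = (i + 1) % len(clusters) {
		if spec.Weights[clusters[i]] > 0 {
			result[clusters[i]]++
			assigned++
		}
	}
	return result
}

func main() {
	spec := replicaSchedulingSpec{
		TotalReplicas: 10,
		Weights:       map[string]int64{"us-east": 2, "eu-west": 1},
	}
	fmt.Println(distribute(spec, []string{"us-east", "eu-west"}))
}
```

The per-cluster results would then be written into the template overrides, as described in the discussion.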
A: And the implementation: the controller implementation that I have done right now is sort of similar to the sync controller; it's not very different from the sync controller, except for the reconcile and the list of interests it needs to handle, and it's specific to the deployment type. What I did evaluate before this is: there is auto-generated scaffolding for controllers in apiserver-builder when you bootstrap the API, so I did try to evaluate the usage of that. That is also doable, but we also have this open item of whether we move to CRDs or not, and in that case that scaffolding, and writing the controller against it, might not be very useful. Yeah.
A: My intention is to do it in steps, because if I try to put in, say, a kind of adapter which can take in different types and provide an interface which can do scheduling with it, then doing all of it in one PR might be a little, you know, too much to review and too much to do. So my intention is to do it in steps. Yeah.
B: So rather than worrying about ReplicaSets or Deployments for the simple scheduling case, you just worry about the replica count as a field, which is a lot easier to deal with. I'm not saying approach it that way now, that's just kind of an idea. You know, the whole adapter concept was my first stab at moving away from having type-specific controllers, and moving further along that path has been kind of interesting, because it means you don't actually have to write code for many cases. I'm not saying we get away from it entirely.
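A minimal sketch of the adapter idea mentioned here: a small interface exposes only the fields the scheduler cares about (just the replica count in this sketch), so the scheduling logic stays type-agnostic. The interface and type names are illustrative assumptions, not the actual federation-v2 API.

```go
package main

import "fmt"

// replicaAdapter is the minimal surface a generic scheduler would need.
type replicaAdapter interface {
	Replicas() int32
	SetReplicas(n int32)
	Kind() string
}

// deploymentLike is a stand-in for a Deployment template object.
type deploymentLike struct {
	replicas int32
}

func (d *deploymentLike) Replicas() int32     { return d.replicas }
func (d *deploymentLike) SetReplicas(n int32) { d.replicas = n }
func (d *deploymentLike) Kind() string        { return "Deployment" }

// rescale is type-agnostic: it only talks to the adapter.
func rescale(obj replicaAdapter, desired int32) {
	if obj.Replicas() != desired {
		obj.SetReplicas(desired)
	}
}

func main() {
	d := &deploymentLike{replicas: 3}
	rescale(d, 5)
	fmt.Println(d.Kind(), d.Replicas()) // Deployment 5
}
```

Adding a second implementation (for a ReplicaSet-like type, say) would not require changes to the scheduling code, which is the point being made about avoiding type-specific controllers.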
A: Yep, so I haven't concluded that. I mean, the thought in my mind is that I'll actually be using the integration framework which is already there for the tests, and I haven't started on the tests explicitly for this particular stuff. So currently what I have implemented is the controller, and I'm sort of manually figuring out whether it's doing the right thing or not. But yes, in the same PR I'll follow that up with tests using the integration framework.
A: Right, about that: when I said reuse the integration framework, I mean it already has some setup which it can do; it can set up all the servers and provide the running endpoint so the clients can talk to them. That much it can do; the rest of the stuff would be exercised from there, and if there is any deviation, that's a problem.
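The federation integration framework itself isn't shown here; this is just a stdlib-only sketch of the pattern it was described as providing: start a server in-process, hand its endpoint to the code under test, and check the result. The handler and URL path are illustrative assumptions.

```go
package sketch

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestClientTalksToEndpoint(t *testing.T) {
	// Stand-in for "set up the servers and provide the running endpoint".
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, `{"kind":"DeploymentList","items":[]}`)
	}))
	defer srv.Close()

	// The code under test would receive srv.URL instead of a real API server.
	resp, err := http.Get(srv.URL + "/apis/apps/v1/deployments")
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status: %d", resp.StatusCode)
	}
}
```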
B: Can I read information from the underlying clusters is one part; can I write the resources the sync controller expects is another; and then the part of actually synchronizing is kind of what the sync controller is doing, and you don't really have to exhaustively test that in the context of scheduling. You need to make sure it all ties together, but I think it might simplify the testing if you sort of break it into pieces.
D: You know, this is a federated ReplicaSet or Deployment or whatever it is, and this is how users expect it to behave, and then we have tests to verify that. Irrespective of how the underlying pieces interact with each other, or how the underlying classes or code work, the end-user behavior should be as per the spec. Are we still using that distinction between the three kinds of tests? Is the terminology that I just described accurate?
D: Okay, I mean, I think that's fine, as long as we have the concept of an end-to-end test which tests, you know — you can imagine a user going through the user manual and saying, oh, if I create one of these things, what will happen is that it'll end up in the underlying clusters, and if one of the clusters dies this will happen, or whatever the case may be — and have a test that essentially codifies that and verifies that it does happen.
B: Well, I would say for the most part we do need some full "I'm going to validate everything in one go" tests, but the complexity of the distributed system doesn't suggest doing all the testing at that level. Ideally we'd be able to do error cases separately from an end-to-end test; the corner cases would be evaluated in integration, where you could really target just the failures, because you don't really have an opportunity to do effective fault injection with a deployed sort of scenario. Does that make sense?
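As an illustration of targeting failures below the end-to-end level, the test sketch below injects a fault through a fake cluster client, something that is hard to do deterministically against a deployed federation. All type and function names here are illustrative assumptions, not actual federation-v2 code.

```go
package sketch

import (
	"errors"
	"testing"
)

// fakeClusterClient stands in for a member-cluster client and can be told to fail.
type fakeClusterClient struct {
	replicas map[string]int32
	failing  bool
}

func (f *fakeClusterClient) UpdateReplicas(name string, n int32) error {
	if f.failing {
		return errors.New("injected cluster failure")
	}
	f.replicas[name] = n
	return nil
}

// pushReplicas is a stand-in for the piece of controller logic under test.
func pushReplicas(c *fakeClusterClient, desired map[string]int32) error {
	for name, n := range desired {
		if err := c.UpdateReplicas(name, n); err != nil {
			return err
		}
	}
	return nil
}

func TestPushReplicasClusterFailure(t *testing.T) {
	c := &fakeClusterClient{replicas: map[string]int32{}, failing: true}
	if err := pushReplicas(c, map[string]int32{"deploy-a": 3}); err == nil {
		t.Fatal("expected an error when the cluster is failing")
	}
}
```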
D: Yes and no, I guess. I mean, I have personally written end-to-end tests that, for example, simulate packet loss between clusters or packet loss between nodes, and broken network interfaces and things like that. These were the — I forget what they were called, but the Kubernetes, you know, failure tests — and it's not that hard, and it is very useful.
D: Manual testing, that's for sure. I think unit tests are great; they typically involve a lot of work for less value, in the sense that they don't actually prove the system works, but they are valuable. Integration tests, similarly, are typically less work per amount of code coverage, but still not sufficient; and then end-to-end tests.
D: By all means, if the integration — sorry, if the end-to-end tests can leverage tooling that is created in the integration framework, that's great: the ability to make a node fail, or do this or do that, whatever. Let's build the libraries to make all of the testing easier to do, and share them if necessary.
B: Yeah, and then there is kind of an effort around trying to centralize development of that stuff. It's still pretty early, but the hope would be that that picks up and we're able to pick up things like fault injection for free, rather than having to maintain them separately from what the rest of kube does.
A: I mean, we could skip it also. Yeah, I'm actually trying to integrate that external-dns, the same thing we discussed last week. Actually, before that I was just trying out the prototype thing, and I found that external-dns is using slightly old packages, and I got stuck with the dependency management of the different imports it is using. So I think, okay, there are alternatives to it, exactly like.
A: My main concern right now is to get our crude implementation of it out as soon as possible and give a demo of that. But to do that there is a CoreDNS piece I was expecting; that PR got stuck in external-dns for almost, I think, six to nine months, and it is not moving, so I got stuck over there. Actually, the other option is that we could do that demo on GCE or some other environment like that.
A: The basic idea is: okay, I did one PR a couple of days back that basically consolidates the load balancer and whatever information is required to program the DNS. That's a data type in Federation, so an external component can make use of that particular data and program the DNS in whichever cloud provider they want.
A: So that's why we need to provide a source which can consume this data. That particular source could either be in the external-dns repo, or within Federation, or outside Federation also; but it is pretty much integratable, since it is built over the layers, right. So what I am trying to do right now is implement a source which consumes this data, generates the DNS records, and feeds the controller in external-dns.
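A hypothetical sketch of the "source" described here: it consumes the consolidated per-cluster load-balancer data that Federation would write out and turns it into DNS records that an external-dns-style controller could program. The type and field names are assumptions for illustration, not the actual multi-cluster DNS schema.

```go
package main

import "fmt"

// clusterEndpoint captures per-cluster load-balancer information.
type clusterEndpoint struct {
	Cluster string
	Region  string
	IPs     []string
}

// multiClusterDNSStatus is a stand-in for the consolidated DNS data type.
type multiClusterDNSStatus struct {
	ServiceName  string
	DomainSuffix string
	Endpoints    []clusterEndpoint
}

type dnsRecord struct {
	Name    string
	Targets []string
}

// recordsFor generates one record per region plus a global record, roughly
// the shape of the v1 federated-service DNS layout.
func recordsFor(s multiClusterDNSStatus) []dnsRecord {
	global := dnsRecord{Name: s.ServiceName + "." + s.DomainSuffix}
	regionTargets := map[string][]string{}
	for _, ep := range s.Endpoints {
		global.Targets = append(global.Targets, ep.IPs...)
		regionTargets[ep.Region] = append(regionTargets[ep.Region], ep.IPs...)
	}
	records := []dnsRecord{global}
	for region, targets := range regionTargets {
		records = append(records, dnsRecord{
			Name:    s.ServiceName + "." + region + "." + s.DomainSuffix,
			Targets: targets,
		})
	}
	return records
}

func main() {
	status := multiClusterDNSStatus{
		ServiceName:  "myservice.mynamespace",
		DomainSuffix: "example.com",
		Endpoints: []clusterEndpoint{
			{Cluster: "c1", Region: "us-east1", IPs: []string{"10.0.0.1"}},
			{Cluster: "c2", Region: "eu-west1", IPs: []string{"10.0.1.1"}},
		},
	}
	for _, r := range recordsFor(status) {
		fmt.Println(r.Name, r.Targets)
	}
}
```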
D: Sufficiently well supported, stroke, robust — if we're back with PRs that are stuck for nine months and whatever, I get a bit nervous that this might hold up our work. So one approach I can think of is to continue using the DNS provider that we have right now; or, a DNS provider that actually supports external-dns would be a pretty simple exercise. I mean, I built one of these things before in a weekend, actually, and then we can basically use either.
B: Having an abstraction layer makes sense, so that part is fine. I would caution that the idea that we have something that's workable today, I think that's a bit optimistic, because some of those providers may in fact work, but the reality is there's no CI that's validating any of them, at least none upstream. And the hope would be that that's the problem external-dns is going to solve: it's going to take the maintenance burden off of us, rather than just having stuff in the tree that we don't really have the expertise to maintain or evaluate on a regular basis. So I'm all for an abstraction layer; I just still think that external-dns is something we want to try.
B: So I don't know whether or not this makes sense, but my hope would be that for the existing stuff that could be taken from v1, there could be kind of a shim that could read this new multi-cluster DNS resource. So there is kind of a separation of concerns, and that would allow the eventual integration with external-dns to be in the model that we've discussed, rather than doing something that diverges from that. Does that make sense?
A: So if we put something in, as was just mentioned before this, that is something we have to maintain, right, and as mentioned we don't have a CI to tell whether it always works or not. My suggestion would be that we can put some effort into pushing that PR, or pushing the changes which might be beneficial for Federation, in external-dns itself, and maybe only for that portion of changes or for that package, whatever it is, we can be active. Yeah, yeah, we'll definitely try to follow that particular path.
B: I would suggest doing it as a separate repo, just because that would sort of have the same model as external-dns, and just build an image that could be used to run the controller. This would be the same model: we're just going to run a controller somewhere that's going to consume this and program the DNS. And then, if and when external-dns becomes a thing, awesome; if it doesn't, having it out of tree, I think, makes sense anyway.
D: Right, the image, yeah. Yeah, I agree, I agree. And, to be honest, at the moment it's not actually an image or a separate process, it's just a library; so you just call the library, you link it into your thing, and that library itself could be exported out of v1 as, you know, whatever the exportable library is, which is how it's done at the moment. You just import that, but.
B: That would involve vendoring v1 code or copying a huge chunk of it. My suggestion is it would be lighter weight if we just took the working code, sort of gave it an entry point so it could run independently of the rest of it, and just produced an image; and then Federation v2 could simply consume that image.
B: However that's done, I mean, I don't think it matters whether it's done in the v1 repo or in a totally new repo, whatever is more convenient or easier, as long as an image can be published that can then consume the resource created by the Federation v2 controller and program the DNS. Yeah.
C: If we're done talking about that, actually I wanted to go back to discussing the plans to integrate different controllers in the federation-v2 repo. Do we plan on releasing all these controllers as part of one executable image? And if so, given that we've worked to decompose the API, it might make sense, ideally, to have the API as a separate repo that can be consumed by a bunch of different controllers living in separate repos outside of that.
B: Oh no, it does make sense. In the near term I'm not too concerned about providing the separation; in the longer term I think it's necessary to put externally developed controllers on the same footing as internally developed ones. But in the near term we still have a lot of risk in just getting the primary behavior done, and we don't have a lot of external contributors clamoring to work with us. So I think it's important, but I don't know if it's a near-term priority.
C: Yeah, I see how moving to CRDs could definitely make this sort of a moot point. I just want to make sure that, since we've worked on decomposing this — I think for the short term it makes sense to bring it all together in one repo, but I want to prevent us from cornering ourselves into a situation where we support one controller-manager image that supports all controllers.
B: Having to vendor a whole bunch of controller code that isn't necessarily relevant — my hope would be that, since we're using apiserver-builder in the v2 repo today, when we get to CRDs, maybe that's the inflection point where we actually create a new repo that has the generated client, because you still generate a client for CRDs, and then the v2 repo consumes that. So we cut over, kind of, with CRDs moving the API out, and at that point it becomes easier for external developers to use the API. Does that make sense? Yeah.
A: What I was saying is that, as an interim, whenever the need appears, it probably would not be very difficult to have multiple binaries for specific controller types or classes of controllers from the same repo, and in the API server also we can have some feature-gates kind of thing which can run the specific API, or kind of API, which is needed for that particular deployment, I think.
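A small sketch of the "feature gates" idea mentioned here: a single binary (or per-class binaries built from the same code) where a flag selects which controllers actually get started. The controller names are placeholders, not the real federation-v2 controller set.

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// controllers maps a gate name to a function that would start that controller.
var controllers = map[string]func(){
	"sync":       func() { fmt.Println("starting sync controller") },
	"scheduling": func() { fmt.Println("starting scheduling controller") },
	"dns":        func() { fmt.Println("starting dns controller") },
}

func main() {
	enabled := flag.String("controllers", "sync,scheduling,dns",
		"comma-separated list of controllers to run")
	flag.Parse()

	for _, name := range strings.Split(*enabled, ",") {
		start, ok := controllers[strings.TrimSpace(name)]
		if !ok {
			fmt.Printf("unknown controller %q, skipping\n", name)
			continue
		}
		start()
	}
}
```

Running it as `./controller-manager -controllers=scheduling` would then start only that subset, which is the interim flexibility being described.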
B: Definitely doable. My reservation is that we haven't really gotten to the point of needing shared informers yet, but I can imagine it being a concern at scale, and the advantage of one binary is that you can share informers between things; otherwise they each have their own caches and their own connections. But I'm just putting that out there, not saying it's a reason not to have separate binaries, it's just something to think about, yeah.
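A toy illustration of the shared-informer point: within one process, a single watch and cache can feed several controllers instead of each maintaining its own cache and API-server connection. This is stdlib-only and schematic; the real mechanism would be client-go shared informers.

```go
package main

import (
	"fmt"
	"sync"
)

// sharedCache is a stand-in for a shared informer's store.
type sharedCache struct {
	mu    sync.RWMutex
	items map[string]string
}

func (c *sharedCache) set(key, val string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = val
}

func (c *sharedCache) get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[key]
	return v, ok
}

func main() {
	cache := &sharedCache{items: map[string]string{}}

	// One "watch" populates the cache...
	cache.set("deployments/web", "replicas=3")

	// ...and multiple controllers read from it without their own connections.
	for _, controller := range []string{"scheduling", "status"} {
		if v, ok := cache.get("deployments/web"); ok {
			fmt.Printf("%s controller sees %s\n", controller, v)
		}
	}
}
```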
B: I mean, the kube infrastructure is not exactly efficient when it comes to size, yeah. But yeah, I think your concern is — I mean, this is hypothetical right now; when it comes time to actually deploy, sorry, to release something and to consider the operational implications, then all these things we'll have to worry about. Right now, and.
D: We're looking for agenda items; I've got one other one that I could briefly put on the table. Sorry, assuming that the previous conversation has finished — it seems like it had. Done? Yeah, please. So I undertook at the multi-cluster SIG meeting the other day to just clean up some of the slides, which I did, and I took the liberty of adding some proposed, sort of, rough —
B: I don't know, I'd have to give it some thought as to whether we were going to be in a position to go GA by the end of the year. I mean, I definitely support the view that we kind of have to catch up to where v1 was in terms of acceptance and people's desire to use it. And at this point, you know.
B: Is it really? I wouldn't say that it's great; it's not necessarily much different than v1, but I don't think that's sufficient for me to want to move to beta. So, all that to say, I think over the next few weeks, if we can lay out what the scope is and all the work items required, and then sort of make sure that we can fit within the schedule.
B: This kind of gets pushed back past — you know, we want to get to beta first, and in the process of doing that I think we'll have a better sense of what it's going to take to get to GA. And if we've laid good groundwork for beta, my hope would be that GA is a matter of incorporating user feedback, filling in the gaps that we might have missed in the process of getting to beta, so to me it's, like, an iteration.
D: I mean, I guess I'll just make it clear: by October we will have been working on this project for three years, and I think if we can't produce a beta version by then, we need to question the feasibility of the project. So yeah, I would encourage us to push pretty hard for something like that. If we can't produce a beta in three years, then we have a bigger problem.
B: We might have to go back on — so to me, it's all about de-risking in steps. In my mind, getting an alpha, getting something usable, really interests the people who are very invested in the idea of this, so they can start really pushing on it; that to me is the first priority. The second priority is making sure that we can transition, with their feedback, to beta, and then to the wider pool of users who are more comfortable dealing with something that is a bit more guaranteed.
B: So to me it's just a process. I'm happy that we're discussing it now and starting to think about it, because it's going to be a long road and we really have got to get started. So I'm with you, Quinton; as usual I'm maybe a bit more cautious, but I want to be traveling this road just as badly as you do, I think.
A: What might be the ramifications of not meeting — like, say we define that around June we will be able to cut some alpha, and around September we will look at cutting something beta for at least some of the features. Do we just keep these timelines and, for whatever features we are able to, mention them as alpha or beta? Or are we talking about the whole project being alpha and beta during these timelines, and what happens if we don't really meet those timelines, if we're talking about this for the whole project?
B: I mean, the previous approach of codifying alpha/beta was, like, API based, saying oh well, Deployments or ReplicaSets or so forth — I guess I think it's just a different model. Sorry, I'm hoping this is answering the question. I think in the roadmap on the slide deck we have simple replication across clusters as one item.
B: That's kind of foundational, and I wouldn't expect that to change; that's the whole thing. So, like, do I support replication of Kubernetes types, including CRDs — yes or no; maybe it's in alpha, is it beta — that's the whole thing. The higher-level stuff, like do I support specialized behavior for a Deployment or for Services or that sort of thing, I think that could be more on, like, a feature basis.
A: I think I got the gist. Well, I mean, I think that's reasonable also. So we mention that for at least some foundational stuff, for some set of APIs, we will strive, try to ensure, that we can progress them to beta or whatever, and higher-level features which are bigger, or something like that, can take their own path of revision. Yeah.
B: And I guess, in the same way that we discussed yesterday, it's not really about, you know, Deployments being alpha or beta or GA; it's more about the features that Deployments enable in the context of Federation. I think there's definitely room for having those be, like, alpha, beta, GA, but, I don't know.
B: I've never been particularly comfortable with how kube sort of imagines alpha, beta, GA, because there's confusion about the API types and then the underlying controllers, or the overarching controllers, or whatever. I mean, they're tied at the hip, but one has to be totally fixed, because that's how the client interacts with the API; it has to be stable to some degree. But the controller behavior, you know, has to have functional guarantees, while its implementation can change; it can do more things, and that doesn't necessarily imply, like, an unstable state — v1, alpha, beta, whatever. And I know this is kind of a bit meta, but do you think that it's relevant for Federation? Because we're still targeting these types, we're still sitting on top of this behavior, and so it influences how we think about the features we implement, I think.
C: I think that's a good point. It seems like the versioning of the Kubernetes API sort of follows that alpha/beta/GA versioning scheme, but when you talk about controllers and what features they enable in terms of those APIs, it does seem like there could be some variance there, and I'm not sure that Kubernetes has — at least I'm not aware of — a separate versioning for feature sets that incorporate those APIs. Okay.
D: However many years you choose to, you know, set as the scope for this project — if we hadn't produced a beta that people could use yet, I would defund the project. So I think that's probably the biggest risk we have: that we get to the end of this year without a beta, and the companies that are currently funding the development decide they no longer want to fund something that is permanently in pre-beta.
B: I mean, I think that kind of gets a little bit into politics, to my mind. Like, if someone's actually paying attention to what's going on, you know, things ebb and flow; at this point we're in pretty serious development and I don't think that's going to shift. And so, if someone wants to just see this from the outside and just consume something independently of interacting with the community or with vendors, like ourselves at Red Hat or anybody else that's heavily involved, I don't know what to say; that's not really how open source works. I mean, I get it, I totally understand: we do have kind of a credibility problem within Federation if we can't deliver things that people find useful in a timely fashion, no question. But I don't think it's quite as clear as "we haven't delivered something" — something was delivered, and people have actually been using it. Yeah, I mean, people are actually using it.
B: The alpha/beta/whatever is more like: do we have an active community that's going to continue working on this and maintaining it, so that if you depend on it you're not self-supporting — otherwise you might as well just write it yourself. I feel like I'm spinning wheels again; I'm sorry, I'm probably.
A: On time — yeah, we have about two minutes left. I think that what you say makes sense. I think another risk that we face is not just meeting the versioning deadlines, but also competition with a lot of potential projects, because users are asking for these types of features. So whether or not they get them from Federation or the multi-cluster SIG, versus either developing their own or going with some other solution out there, there is potentially a risk where another project could end up getting more momentum as a result.
B: I apologize, Quinton — whatever my opinions are, I agree with you that we just need to get something out. We need to get to a point where we can deliver useful software into the hands of users, and they have some degree of confidence that it's going to be supported. So to the degree that we agree on that, let's make a plan and make sure that we can get there, yeah.
B: So maybe we can devote time in the subsequent weeks' meetings to nailing down the scoping and the timing of this stuff, if that's not unreasonable — I mean, as well as actually doing the work to make it happen, but actually setting milestones and things like that. I think it's always good to have clarifying goals.