From YouTube: Kubernetes SIG CLI 20171108
C
First, I can give a status update on declarative application management. Jeff and I are working on breaking some of the dependencies on kubectl. It's also a declarative application management tool, but it can't currently use the kubectl factory to build with, given the dependency situation, so we are trying to make that possible.
B
dep just found two constraints, really: client-go and kubernetes. But I had to override a couple of things. One is docker/distribution: we're actually using something later than the last released version in kubectl, so I needed to say, grab it off master, because we're using unreleased features. And then there's blackfriday: version 2 was out and we're using the 1.x-compatible version, and for some reason dep didn't pick that up, so I gave it the override suggestion. So there was that.
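The kind of dep manifest being described might look roughly like this. The stanzas use real Gopkg.toml syntax, but the specific versions and branches here are assumptions for illustration, not the actual manifest:

```toml
# Sketch of a dep manifest with the two overrides described above.
# Version numbers and branch choices are illustrative assumptions.

[[constraint]]
  name = "k8s.io/client-go"
  version = "5.0.0"

# docker/distribution: take master because unreleased features are needed.
[[override]]
  name = "github.com/docker/distribution"
  branch = "master"

# blackfriday: pin to the 1.x line since v2 is incompatible.
[[override]]
  name = "github.com/russross/blackfriday"
  version = "~1.4.0"
```

An `[[override]]` wins over whatever transitive constraints dep computes, which is why it works for the cases where dep picked the wrong series on its own.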
B
But what that didn't deal with was the dependencies that all live in staging. So what I ended up doing was a Makefile that, once dep figures them out, trashes the versions that are there and then copies them out of kubernetes/staging into the right place. If you don't do this, you find minor incompatible changes between the latest released versions that dep will pick up and what you actually have.
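The staging copy step could be sketched as a Makefile rule along these lines; the target name, repo list, and paths are hypothetical, not the actual Makefile being described:

```make
# After `dep ensure` resolves versions, replace the staged k8s.io
# repos in vendor/ with the copies from kubernetes/staging, so the
# vendored code matches what kubernetes itself builds against.
STAGED := apimachinery api client-go

vendor-staging:
	for repo in $(STAGED); do \
		rm -rf vendor/k8s.io/$$repo; \
		cp -r $(GOPATH)/src/k8s.io/kubernetes/staging/src/k8s.io/$$repo \
			vendor/k8s.io/$$repo; \
	done
```

The point of the trash-then-copy is exactly the mismatch mentioned above: the released tags and the staging tree can differ in small, incompatible ways.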
B
There are some interface changes and things like that, and this just copies them out of kubernetes and sticks them in here. This is actually the same approach Helm uses when they cut releases, although they pin to a version of kubernetes, so they're copying out of a kubernetes release rather than off the tip of master, which is what I did here. What that ended up doing is give me something buildable, and then the next thing I said was, well...
B
Most of kubectl is actually in the pkg directory, so I took pkg/kubectl from kubernetes/kubernetes and moved it here into the pkg directory. I had to do a little bit of renaming, because it's going from kubectl to pkg, so I toyed with the renames a little to make it work, and I made the whole thing buildable, so it's got all of that there. The one gotcha is that most of the tests don't pass.
B
A huge number of tests don't pass because they're referencing things in the examples directory, the OpenAPI specs, and things of that nature, and I haven't actually decided: do I just link to them in the vendor directory? Do we keep a copy here? Do they go somewhere else? I didn't know the answer to that. I fixed a couple of them by just pointing at the location in the vendor directory, and the tests started passing when I said, okay, this configuration file is over there.
B
Those started passing, so I just proved that I could do it, but I wasn't sure what the right path should be. But it's buildable, it actually works; there are just some test issues because it touches kubernetes. It's also worth doing a `go install` if you're going to start running several commands, so that it keeps pre-built versions of the packages and doesn't have to rebuild them every time. But other than that, it worked as a general path to break it out, and it even lets me see things like...
D
Because I know the Service Catalog ran into parts of this, although they're trying to depend on the OpenShift libraries, and there are a couple of people on SIG API Machinery who've been doing a lot of this recently for other components. You may want to send a general query for anybody else who is vendoring k8s and ask for support, because I don't think it should necessarily be on your head to support a particular way of vendoring k8s.
B
In fact, that's one of the problems. A lot of this would go away as an issue, even having to vendor k8s at all, for Helm and others, if the upstreams were synced to the right place, because then you could just say, get it from k8s.io/apimachinery rather than having to copy it out. That removes a lot of it for some projects; it means they don't have to vendor k8s anymore. For kubectl, there's...
D
Others: I thought metrics-server and autoscaler were both importing various k8s dependencies, but I have to double-check; they've been running a little bit longer and they may have moved off. A lot of it is the same kinds of things that kubectl also ends up touching, which is common logic code that hasn't been moved out of k8s into a core repository, or a repository of its own, yet. And this...
E
So let's say, to develop this and start making progress, to start snipping those, right: would the action then be that we try to reduce that list, take those things out of the kubernetes/kubernetes monorepo, move them into separable repos, and then change the dependencies to point at those?
E
Is that the idea? But the problem is that there's this web of these things, right, so you have to both move something out and then go through and update the kubernetes vendored version so that they actually refer to the same types, right, the diamond problem. And now we've actually gone past semantic versioning when importing kubernetes, because we have to start taking an explicit version of kubernetes. So it seems like the dependency dance gets really complicated.
E
I talked to Sam just last week about some of this stuff, and he's trying to figure out what it's going to take to get kubernetes on the dep bandwagon. One of the ideas there, and I don't know if this is insane or not, is: I think one of the problems is that for released versions, semantic versioning, with its very linear type of stuff, works great.
E
When you're in active development, coordinating dependencies across multiple things, things aren't as linearizable as what you get with official releases. So one idea that I floated with Sam at the time was: what if dep supported not only specifying dependencies based on versions, but also saying, okay, head, but including this tag. Like when you commit a version, when you submit a PR, you say this version now has...
E
This version now has feature X, and then when you specify your pins you say: I need something that has at least feature X. Because then you can actually decouple the dependency tree based on when things get in, versus some sort of strict versioning, right. So I don't know, I'm just throwing some ideas around, because I think right now this is great, but then I'm trying to imagine how we move this forward, right.
B
And some of this, I think you're right, is unwinding it, and how do you do it when you have to unwind things. One of the things that I have learned from trying to help unwind some other monoliths in the past is that it forces you to take a package, put clear interfaces on it, and treat it as its own thing. So it actually makes the problem bite-sized for individual teams, whereas when you have to take on a whole feature, you might have a few...
B
How much of this belongs in k8s.io/api? And if those things move over there, then you start saying: okay, we're going to make the API, we're going to add these new things to the API, and then these other folks are going to go implement it. Maybe that's alpha or beta or something like that, using pre-release tags. Then there might be a strategy there to start making some of this work and breaking it up.
G
What we want to do, the general approach, is to clearly understand what parts of kubernetes, what parts of the core, are intended to be vendored to the outside, and then change kubectl so that it only depends on those things that are intended to be vendored. Then kubectl (sorry, I have to close the door)...
G
Kubectl can live in its own repo, and it's going to depend on things that any other client would depend on, like client-go and api, and it's going to share code. We have a lot of code that kubectl uses that the core also uses. We have to put that into another repository, unfortunately, because when you want to split something out into another repo, you're going to have to have at least one more.
G
A third one that holds the code that is shared, and that's called common. We also have a utils repo that has some basic functionality that any user of Go might be interested in using. So the general idea, again, is: first break bad dependencies. We don't have to move any code to break the bad dependencies; we just have to clean them up. And once kubectl depends in a sane way on packages that are intended to be vendored...
E
Okay, so the question in my mind, though, is that there's this assumption here that if we pull out a package and we put it in its own repo, or in one of these common repos, then it's frozen in time, it's done, it's good, and we move on. I don't think that's an assumption at all. That's not the truth! So now it's like, okay.
E
Whereas these packages, as we're refactoring, have a constant churn rate, and right now the tools that we have for managing dependencies and managing PRs are not built for churn across repos, right. That's why we're doing the staging thing, because it lets us do one PR that updates two repos at once, right. But that staging thing doesn't work across repos either, currently. So that's a lot of the hacks that Matt had to make, right.
B
A question, though, on this: why do we have such high churn in some of these packages, right? Why are some of these things iterating at such a high rate and not driving towards stability? I know we've had things like stability releases and we're driving towards things of that nature. So why is there such high churn in some of these things, especially the things that kubectl would need, like API definitions?
D
Unless we are willing to just do things two different ways in two different repos, every single one of these is a refactor to gradually split them out. If somebody wanted to, today, copy the kube repo, totally rename everything, let it drift, and then cut its dependencies, we would not have high-rate-of-change core libraries, because we would simply stop trying to share those. And that's the real choice, well...
E
Yeah, right. I mean, going from four to five with client-go: I think it would be an interesting retrospective to look at all the breaking changes between v4 and v5 of client-go and figure out whether we really needed to do that stuff, because I think some of that is just gilding the lily. It's a very ugly lily, but some of it, I think, is just cleanup and refactoring that happened because it was easy to make coupled changes based on the staging directory. I think...
D
Client-go was reused in things like the downstream Service Catalog. It was kind of a choice: you could spend two years without changing anything, and we wouldn't be able to get to things like Service Catalog and metrics-server and some of the other pieces, or we refactor to solve the same problems and keep everything current. It's basically a trade-off of time versus, right.
D
Client-go is handed out to third parties who truly don't expect to get broken, because it's being used. It has all the same use cases that it did before it was split out. Nobody ever went and said, we're going to stop supporting these use cases by copying this code and just letting it drift. We literally said we're going to support everything we've always supported before, and the only way to do that was the kind of ugly vendoring, so.
D
It does. We split api out into its own repo so that other projects could depend on it, because we hit the diamond problem: you can't compile the same binary with two different versions of the API in it. And there are no internal types in it, which is the big point, because we don't want people exposed to internal types. I would...
E
I'm not worried about the core client-go stuff; it's the other libraries. Are we going to evolve, say, the rate-limiting libraries? We've done that in the past, fixed everything up, and moved on with our lives. As we start splitting this out, that stuff is going to become that much more painful, I think.
A
There's a question of what our dependency updating model should be. It definitely should be automated, but what should the model be? Should it be a release-and-automatically-update model, or should it be a tests-are-green-and-automatically-update model, right? Are those the only things...
E
that we can do there? I don't think those are necessarily the only two choices, and that was the idea around: can we actually move to a feature-tagged thing between major releases, so that we can let a lot of stuff be in flux, then boom, cut a release version, delete the tags, and move forward. Yeah, go ahead.
D
I did want to raise: I think, at the heart of it, your client SDK point is a key one, which is that we split client-go in a way that allowed people to build controllers, because most of the consumers at that point in time were things that wanted to move out of tree and be able to split that up. Most people aren't building controllers, but it wasn't possible to support two ways of doing controllers, so that was kind of a lot of the root of it.
D
I think that's kind of the trade-off here. The clean refactor is doing a lot of work in core kubernetes and then gradually drifting. But if we wanted to accelerate it, one way to do it would be to copy the whole tree and say this is now an isolated problem, kubectl is drifting from upstream, and it needs to aggressively snip. Basically, you can either do it slower, or you can rip the band-aid off.
D
It's going to be more work; you're going to spend the same amount of time either way. It's a matter of how fast you prevent the new stuff from coming into the system. If you forked kubectl hard today, copied all the packages, renamed all the packages, and only reused client-go, api, and apimachinery, you would have a ton of duplication.
D
People would have to fix bugs in two places, but in theory you would force yourself into the spot where you are now drifting independently. You wouldn't pick up automatic features from some of the core stuff; you'd have to concretely make changes in order to pick that up. It would pull the band-aid off now. I mean, that's the other option, so.
E
Ilya had an interesting idea. Ilya, do you want to talk about this, or can I read your comments? I don't know, yeah, okay. So the idea is: can we move the basic parts of kubectl out and leave the internal-type stuff in the main repo, right? Could we lean on kubectl plugins as a way to break kubectl apart? The one challenge is the factory.
D
I mean, really the heart of kubectl is the factory, which provides the abstraction that lets someone say, I want to take kubectl and add in new stuff without it really being aware. The factory is really just a whole bunch of interfaces that today pull in the internal types. So you would have to...
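As a rough illustration of the factory idea, here is a sketch in Go. The `Factory`, `Object`, and `describe` names are invented for illustration; kubectl's actual factory is far larger and, as noted, currently pulls in internal types:

```go
package main

import "fmt"

// Object stands in for an API type. Real kubectl commands reach the
// internal runtime types through the factory, which is exactly the
// coupling being discussed.
type Object interface {
	Kind() string
}

// Factory is a hypothetical, trimmed-down analogue of kubectl's
// factory: a bundle of interfaces that commands depend on, written
// only against public, vendorable types.
type Factory interface {
	Get(kind, name string) (Object, error)
}

type pod struct{ name string }

func (p pod) Kind() string { return "Pod" }

// fakeFactory satisfies Factory without a cluster, showing how a
// command written against the interface stays decoupled.
type fakeFactory struct{}

func (fakeFactory) Get(kind, name string) (Object, error) {
	return pod{name: name}, nil
}

// describe is a "command" that only knows the Factory interface, so
// it compiles without importing any internal packages.
func describe(f Factory, kind, name string) (string, error) {
	obj, err := f.Get(kind, name)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s/%s", obj.Kind(), name), nil
}

func main() {
	out, _ := describe(fakeFactory{}, "Pod", "web-1")
	fmt.Println(out)
}
```

The point of the sketch: once the factory is only interfaces over public types, new commands (or plugins) can be added without the main tree being aware of them.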
A
Brian, I was just going to +1 this idea. One thing we've been discussing: the client SDK idea was mentioned, and we also need a server SDK for people building new APIs, whether CRDs or aggregated APIs, or just separate APIs that are kubernetes-style. More and more things want to use that API style, in the ecosystem as well as in the core project, as we try to pull things apart.
A
I think the generic parts of kubectl are going to need to go with the API machinery as part of that SDK. So if you have kubernetes-style types, you can do generic create, delete, and get kinds of operations for anything that speaks kubernetes types. I definitely see us going in that direction, where the basic skeleton is very general and doesn't have any domain-specific commands at all. I would like to see the domain-specific stuff move to plugins.
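The "generic skeleton" idea can be sketched as follows. `Metadata` and `Store` here are hypothetical stand-ins for the real runtime.Object/ObjectMeta machinery; the sketch just shows how create/get/delete can be written once for anything that speaks kubernetes-style types:

```go
package main

import (
	"errors"
	"fmt"
)

// Metadata is a hypothetical minimal contract for a kubernetes-style
// object: it reports its group/version/kind, namespace, and name.
type Metadata interface {
	GroupVersionKind() string
	Namespace() string
	Name() string
}

// configMap is one example type; any number of types can satisfy
// Metadata without the generic code knowing about them.
type configMap struct{ ns, name string }

func (c configMap) GroupVersionKind() string { return "v1/ConfigMap" }
func (c configMap) Namespace() string        { return c.ns }
func (c configMap) Name() string             { return c.name }

// Store implements generic create/get/delete with no per-kind code:
// commands written against Metadata work for any conforming type.
type Store struct{ objects map[string]Metadata }

func NewStore() *Store { return &Store{objects: map[string]Metadata{}} }

func storeKey(gvk, ns, name string) string { return gvk + "/" + ns + "/" + name }

func (s *Store) Create(m Metadata) {
	s.objects[storeKey(m.GroupVersionKind(), m.Namespace(), m.Name())] = m
}

func (s *Store) Get(gvk, ns, name string) (Metadata, error) {
	m, ok := s.objects[storeKey(gvk, ns, name)]
	if !ok {
		return nil, errors.New("not found")
	}
	return m, nil
}

func (s *Store) Delete(gvk, ns, name string) {
	delete(s.objects, storeKey(gvk, ns, name))
}

func main() {
	s := NewStore()
	s.Create(configMap{ns: "default", name: "settings"})
	obj, _ := s.Get("v1/ConfigMap", "default", "settings")
	fmt.Println(obj.Name())
}
```

This is the sense in which the skeleton carries no domain-specific commands: everything kind-specific lives in the types (or, in the real proposal, in plugins), not in the generic operations.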
B
...end users towards practical things, you know, what will actually move the bar, right. If we spend a bunch of time untangling spaghetti code internally in kubernetes, that's going to take a bunch of time and we're not going to be there. Versus, if we go in today and just rip kubectl out into its own repo, copy a bunch of stuff over, and start splitting. You know, what's actually going to move...
D
Maybe the underlying point we're making here is that if you're using client-go today and you're not capable of keeping up with the changes, even the breaking ones, you're not being served by this; you want something simpler, and you probably aren't using all of the gravy. The gap today is that somebody would have to do the work to pick up the new API types. I think that's the simpler client-go.
D
As we continue to try to split out pieces, there is going to be a huge amount of duplication between kubectl and the core libraries, and it's going to take a year to refactor the majority of that. It's probably not duplication the kubectl folks want on their own; I don't have any desire to go patch things in two places. But I think most of us would do some of that until it gets to that point, if we took the more dramatic option in the short term, but...
B
That also gets to client-go, right; that's dealing with the Go SDK. Maybe here's another question: do we have enough people contributing to kubectl? And if we broke it out into its own repo more quickly, would we get more contributors to it, and therefore be able to drive its development more quickly? Those kinds of things.
B
I've tried, I've run into so many barriers, I'm just going to be hands-off because I don't want to deal with that now. I've repeatedly had people tell me that, and here's somebody who's new who's given a thumbs-up to this comment, so there's a real person. kubernetes/kubernetes is daunting.
E
If we do this, there are two benefits we get out of it: it's a down payment on going down that path, and I think we're going to see more velocity with kubectl if it's broken out, because we can have a smaller, focused group that can just get things done. Wait.
E
Right, so at the end of the day, I think there are two problems here. There's unlocking velocity in terms of contributing to kubectl: moving it into its own project, with its own pace and its own set of people. Right now, I'm sure there's a ton of fit and finish that we could do with kubectl; people are building tools around the edges for this thing. Okay.
G
So that would be true, I mean, what you're saying is true, I think, if you started from scratch building a new kubectl that talked only to the existing public API and only vendored public code. Then you would unlock a lot of new development, because people would not have to face the problem Clayton was talking about: why does the factory bring in the unversioned types? Well, but...
E
But I think, regardless, we could do a new kubectl started from scratch; that's probably not a good idea. We could duplicate a bunch of stuff; it'll be messy, and there will still be problems. Yeah, it'll still be a problem; we're carrying everything that everybody's talking about.
E
We may be misunderstanding why people find it daunting, and we can ask Jenna why. But I think some of it is the code, and some of it is just that there's so much churn, so much noise, so much stuff going on in kubernetes/kubernetes that it's very difficult to even get started and figure out, how do I build stuff? We...
D
To articulate this: what we're saying is, worse is better, right. I think this is a worse-is-better decision. We're saying we're willing to give up some of the advantages and streamlining that we have in the current model and deliberately work at a faster, less coordinated pace, because we think the end result for kubernetes will be better. Like the earlier point: it wasn't all gilding the lily, but there's a lot of internal due diligence that happens...
D
...that may not be strictly useful to an outside person. The accumulated complexity, and the refinement of that complexity, exists, and it helps. But you could also say that to the outside consumer none of it matters: whether the code is clean or hacky is irrelevant as long as it does what someone uses. So some of that is there, yeah. But if you want to speak up, please, hey.
H
Hi everyone, I am unmuted, success. So I honestly have just some really basic feedback about contributing. My coworker sent me over to kubernetes just to find a simple spellcheck or typo PR to start with against kubernetes, and it took me several hours even just to get set up. First of all, there's the legal stuff, which I'm sure is important.
H
That was tricky, and then I had some issues with my Linux Foundation account, which you need to have, which is very inflexible; you can't delete accounts. Part of it was my fault, because I had two accounts, because I made a mistake months ago, but that was hard. I will say that the help desk was super responsive and super fast, but there was still just administrative stuff to wade through. Then a lot of the how-to-contribute...
H
...docs are located in kubernetes/community, and it literally took me four or five links deep to find out, hey, what even is the process of contributing with GitHub. Now, I'm pretty comfortable with GitHub, but that's not necessarily the case for other people who maybe weren't born into open source, who don't use GitHub. The whole fork-and-clone thing is very natural to me.
A
Those are questions that we should be asking, right. If we use the same build tools, the same CI, and everything else, then moving the code doesn't address any of those things; those are kind of independent issues. We're still going to have a CLA, we're still going to have integration and end-to-end tests, we're still going to have all these things, yeah.
D
I think that's actually a hurdle, in a sense. Even split out, kubectl has to be tested very similarly, so there's an evolutionary process. If we did a harder short-term split, we're still going to have to have all of the same testing pulling it in; otherwise kubectl is just going to go off into the wild for six months or a year and then come back. So there is some level of, we can accelerate some parts, but not all, yeah.
D
We use kubectl in the e2e tests, so there are some of those couplings. How would we split them? I don't want to dive in too deep, but I think some of that coupling is: if kubectl goes off and we write a new kubectl, that is the easiest way to reason about all these problems, which is, yeah, there's just this separate thing that tests against kubernetes completely independently. How many of the couplings are we willing to break between the current project and kubectl? Is it half of them?
G
We should not copy the whole thing out. I think we're going to learn; I think I'm learning, and I think Mengqiy and the people that I'm working with are learning. We're improving the internal APIs, where what I would call the internal APIs, the internal libraries, are how they present themselves and how they are used, and we're understanding where the code divisions are. If we copied everything out, we wouldn't have any of that understanding.
G
We would just have two messes, instead of a better understanding of how the existing library should work. As we extract stuff into common, for example, we're learning what those pieces are that are used by multiple things: not just kubectl, but kubeadm and other CLIs you might imagine writing. Although...
F
I have something, just sort of: it seems like perhaps the actual problem that we have is that kubectl is essentially tightly tied to other things in the main repo, and it's not like a pure client, right? If we were to put it into a separate repo and make it a pure client as part of that, that would make more sense, and having it in a separate repo would stop anybody from adding any new code that couples it.
E
But even if we still have a hairball, it's a hairball in a separate repo, and there's still a social aspect there, like ownership. You can form a smaller group that feels ownership over that thing and moves it forward. It's much more aligned with the SIG, versus, you know, we don't have a strong relationship between SIGs and directories right now; that's something we need to fix. But there is a stronger relationship with some of these repos that are broken out, and say, yeah...
D
It might actually be a little bit easier, because it would stop the flow of new... well, I think the hard part is effectively to fork the repo. Today you could copy most of kubernetes/kubernetes and start deleting stuff, and I think that's really the scary thing, knowing how big kubernetes/kubernetes is. It would create that upfront barrier until people had gone and hacked away a lot of this stuff.
D
There are two or three projects, I could name two or three projects, that are doing similar things to this right now. OpenShift is doing this with some of its components, and it's very daunting until you get the cut going, but the end result is something that does feel more isolated. And I think, Joe, to your point, I could make the argument that if we did this, it would stop the churn and it would make it more appealing, but it would...
D
It would definitely delay the incremental value you're getting, in exchange for the longer term, right. Right now, with the work that's going on, you're improving core kube and kubectl at the same time. If we made the cut, we would have to accept a few months of, so.
A
We would need to get there, yeah; I think that would be a desirable end state. One problem we've had in the past, any time we tried to pull anything from anywhere else, is that it creates a significant percentage of flakiness, so somebody would have to put a concerted effort into mitigating that. What do you mean? Like, cloning from GitHub is not reliable, pulling artifacts from other sources is not reliable; somebody needs to go build a retry loop or something. Can I make a suggestion here?
D
That's fine. There have been a lot of discussions about this; I think everybody was kind of waiting for the first... we're playing chicken with the split on this. It's going to be painful, everybody knows it, and so everybody's kind of waiting for the first victim. If kubectl is in good shape, kubectl could be the first victim; I don't know that anybody else is closer. So this is something for the testing folks to discuss. We're good. I actually want to go back to Joe's point; we have it, we've...
D
You know, contributors: we're going to have to do some things to concretely get people from the community, and that's going to come at an upfront cost without the promise of reward. I think that's the risk: are we willing to take the risk of trying to force this into a scenario where we are accessible to new people, and accept that we will get less done in order to make that happen? It's not an engineering calculation; it's an open-contribution calculation.