From YouTube: Kubernetes SIG CLI 20171220
Description
Kubernetes SIG CLI meeting. Dec 20, 2017
F
Part of it is like that. I think this states it more accurately: the scope is the packages that we want to keep using in the new kubectl repo. How are we going to manage them? We can either move them to the new repo and then vendor them back, or we can fork them so they can evolve with the new fork in the kubectl repo without jeopardizing the stability of kubectl.
G
So I have a question about this. I see there are new tools such as kinflate, and those are in the kubectl repo as I understand it. These are kind of experiments. Is the intent of this to say they are now going to be part of kubectl, because we're bringing the experiments into kubectl itself, or are those going to be handled separately?
B
I think that's probably a more involved question; maybe it should have its own discussion separately. But initially, no part of this assumes kinflate is part of kubectl. However, I think it's an example of a place where we were trying to build something and had dependencies on packages in kubernetes/kubernetes that were not available.
G
I can add to that on Helm. They tried to use external packages such as client-go and whatnot, and they found it difficult, so they ended up pulling in kubernetes/kubernetes, copying packages out, to really work with it. Especially against something like Helm, with the amount you have to deal with, it can be difficult: either you have this massive vendor directory, or you end up trying to pull things in at build time, which slows down your CI, and kubernetes/kubernetes is quite large, which means it has a big impact.
B
Both of those solutions are bad. Actually, I think that's part of why, in other proposals and maybe this one, I've been starting to lean towards just writing what we need in the kubectl directory, instead of spending months trying to get a package moved from one place to another to untangle dependencies. If we only need 10% of what it does 90% of the time, just write a simplified version.
A
I mean, then we're going to create duplicates; we'd be creating duplicates in new repos, versus solving the problem for other folks moving forward. If we look to actually pull, say, kubeadm out, it's going to share some of the same issues, and I'd hate to see copy-pasted code across n different repos as we pull different things out of kubectl.
B
We need to decide; there are a number of variables there, because we're also talking about folks who want to write plugin discovery and installation, and there'd be overlap between plugin installation and discovery and auto-update: finding the binaries, checking their signatures, installing them, all that sort of stuff. So I think my preference would be not to write things.
G
Okay, I also want to make a comment on the API machinery related stuff: actually taking the staging repo, syncing it out, and then being able to use newer stuff. It turns out that for consumers, it's problematic. There are a lot of complaints from people who have to consume that setup, and that's the reason you'll generally find I'm not a proponent of it. It's just the actual experience of people who have to use this regularly or consume new versions; take client-go.
H
That doesn't seem to be related to staging; that seems to be related to what's inside of client-go now. I will admit that it is painful. Technically it does follow semantic versioning, but we've never had a semantic version of client-go last more than three months. Is that the problem that you're hitting here, not the staging? No?
H
About a week and a half ago, we managed to pull that infrastructure outside of Google; Red Hat is hosting that infrastructure now, and so Stefan has direct access and so does Chow. Now both of them have access to the bot to keep it up to date and keep it working, and since we at OpenShift are also heavy users of client-go, we care when it doesn't sync, so I would expect that to become more stable over time. Okay.
G
Yeah, and part of the issue with staging, which you point out here, isn't the syncing: it's the heavy refactoring inside of kubernetes that causes the packages to be heavily refactored all the time, and that's the bigger issue that impacts anybody consuming it downstream. I don't know if there's a nice way to make that easy.
B
The issue is that it's still built inside the core repo. We're trying to move away from building more and more stuff inside the core repo, because it has three-month iteration cycles and all the other reasons that have been listed in the past, and by building it as part of staging, we're still doing that.
A
Okay, so another thing to recognize is that this stuff is relatively stable compared to a lot of the refactoring that's going on with client-go. One option here is that we could take these utilities, pull them out into separate repos, vendor them back in, and then, when you update those things, you're going to have to re-vendor. That is painful; nobody's going to disagree with that. But what it means is that we're going to have to think very carefully about changing and updating these dependencies, and you'll have to make sure they stay compatible.
A
As we move to something like dep, it may make more sense to actually resolve these at the latest version and do more integrated tests across that stuff. But the problem with staging is that it allows, and almost encourages, tightly coupled refactoring, right? If we vendor it out, then you have to think about refactoring in a decoupled way.
H
Joe, I gotta say I have experience doing that the other way around too. If you look at openshift/api and openshift/client-go, those are separate libraries that then get vendored into openshift/origin. The task of actually maintaining and syncing them, opening the pull requests that keep everything consistent, and being able to do the series of pulls in order, is very non-trivial. I think OpenShift is able to manage it largely because we have extremely tight control over what goes into those repos, because right now it's just api and client-go.
A
Just to be clear, I'm not suggesting this for client-go. I think we've got to recognize that there are parts of the system that are still under heavy refactoring pressure, like client-go, versus other parts of the system which we just want to share and which honestly don't change that much. What is the right way to actually share those? Right now the discussion has gone towards forking, which I think is probably going to do us a long-term disservice.
G
We break it out into a common library package, fix it up, make sure it's got proper tests, and make sure it stands on its own, in its own right. Then, once that's ready, we can bring it back into kubectl, and if it makes sense, we can vendor it back into kubernetes itself and remove the original code, because now it's a true dependency. Then we would start to add more to that over time.
H
We also have experience taking a piece that used to be integrated and moving it out into sort of a clone repo: pull it out with a minimal set, vendor what we need to, and rebuild it for separate development. When you try to pay down the debt on that, it's non-trivial. It's not that it shouldn't be done; it's that the work should not be underestimated, and what happens during that intermediate stage is you effectively freeze your repo.
H
Your repo is stuck because you're stuck on old levels of vendored dependencies that you can't get rid of; you can't move forward and you can't move backward. So you end up having to snip your links anyway, and you do it under time pressure. One thing I would be interested in looking at is this: when I look at kubectl, I see three logical units. I see one set of truly generic functionality, and these are things that use runtime.Unstructured, for example.
H
It would be something like reconcile, and I would be looking at either snipping those out and saying, you know, that's a separate tool you can plug in, or trying to change the problem into something where you have a generic resource that you could hit, or maybe both. But I would look at it and say those are our buckets, and I think any tool that gets developed will eventually have those buckets. Now, I do apologize; I have not been coming to this meeting regularly.
B
In the discussion I think there are even a couple more. We've also talked about just generic CLI infrastructure that doesn't depend on client-go or API machinery or any of that stuff. And maybe another way, similar to how you've talked about it: I kind of think of it as there being purely CLI stuff, and then there's stuff that depends on client-go, API machinery, or that sort of thing, but does it in a completely generic way.
H
What I would do to split it into those buckets: I would actually try to attempt that split. I would do things like identify my links and start cutting them, similar to how we did with API machinery and API server and client-go and metrics in the past, right? The past year was about that, and it is possible to identify and mutate those. Once you have the links snipped, you get a lot of freedom about what you want to do. Do you want a separate repo?
B
There are maybe two competing goals within that. One is we just want to build stuff in the kubectl repo, period; we just want to be able to build stuff there, for our own sake, right? The second goal, which somewhat competes with that, is that we want to enable the ecosystem of folks like Helm, or any other CLI, to be able to take advantage of that same stuff. What is the simplest way to address that?
H
Right. One path is an evolution of what we have from here: that is, snipping links and then moving the pieces out, right? Move a core piece of functionality out, and what you're left with, once you get rid of all the special stuff, the things that have heavy knowledge of what they're modifying, is that that stuff doesn't end up living in kubectl.
B
Yeah, thanks, I agree with what you're saying. I don't think we were ever considering getting any of that stuff out as part of this proposal. This proposal isn't about how to move, like, command foo or bar. This proposal is about: how do I read a file into a config object, when the library to do that is in kubernetes/kubernetes and people want to do it outside of kubectl? Okay.
C
So just a question: this specific proposal does not extend to actual commands, right? Because that's another issue we have. We have a lot of commands on our client, on kubectl specifically, that have a lot of logic in them, right, business logic. Take, for example, kubectl run.
H
If you spun staging up, you would be able to start turning a ratchet: you'd be able to move the resource builder, and you're probably going to want a consistent way to handle kubeconfig file loading. We've talked about whether you like that sort of file loading as it is or whether it would change, but you would end up being able to move that piecemeal as well, and as you find more of these utilities, you can move them out and ensure that their dependency trees don't regress.
H
I guess staging has a required end state, but as an intermediate state, I think it would help you tremendously. I think you'd be able to perform the refactors that you need fairly quickly, right? We've done that in the past. Joe Beda complained about it bitterly, but I think in the end he was happy with where we landed.
A
I also think that our dependency management tools are not up to snuff here. Semantic versioning doesn't work well when you're doing active refactoring-type things, because if you're making a lot of interface changes, it's very difficult to manage that across these things with semantic versioning. We'd be at semantic version number 387, right, if we were actually doing it that way.
B
I have one solution for that, if you don't mind. Antoine told me about this, by the way, so he can correct me, but my understanding is there's a way to cut branches and have two packages side by side. So when you break semantic versions, when you break your backwards compatibility, you have the package renamed and keep the old package at the same time.
G
So you end up with lots and lots of renaming. Where this gets hard, especially for consumers, is the fact that we do keep breaking the API. The fact that we just keep doing this makes it hard for people who want to consume it, and now we're talking about, well, we do this, so how do we try to make it work well when we do this? But the problem is that the fact we're doing it at all just makes it hard for downstream consumers, and it probably always will. The way a lot of people solve this
G
is they don't break their APIs very often. So is there a way we could take some of these and move them into a mode where we're not going to break the API all the time, where we do additive things rather than breaking the API? That's the typical case. That's why you don't see a whole lot of projects out there at semantic version 300 when they use that versioning scheme: because they're able to not break their public API all the time.
A
It's a tool; you can use it in multiple ways. You could use staging where you still take semantic differences very, very seriously, but that hasn't been done, and it's extra work to do. Whereas if you break it out into another repo and re-vendor it, you can't avoid it; you can't avoid taking breaking changes seriously.
B
I have one question. The initial thing he said I agreed with, which was: I'd like these libraries, as we write them, to be usable by other commands. I also think that, as written today, they were never meant to be consumed by external folks, which brings in a bunch of other factors that are important to consider, such as: the interfaces are very complicated and relatively poorly documented, and they assume you have a firm understanding of API machinery.
A
I mean, I wouldn't be opposed to taking these things and renaming the package to internal, right, which didn't have meaning back when the package was created in kubernetes, and then just having an ongoing effort. It's a tax that we're going to have to pay: essentially create long-term interfaces around those things that we can commit to, and then move them out. That's a painful process. These things, hopefully, are going to be simpler than a lot of the client-go and API machinery stuff.
A
Honestly, that's still an ongoing process, being able to create the right interfaces around those. Hopefully we can find some stuff that we can break out sooner rather than later, but it's going to be work to take that stuff and actually create libraries out of it.
A
But here's the thing, David: I think technically that stuff is broken out, but practically it isn't, because every new version introduces breaking changes. I didn't say there would be an end state. I'm just saying it seems like it's still an in-process type of thing. You're right, I mean, you didn't say that.
H
So if, in three months, you were able to eliminate all your bad links, if you were able to make yourself so that you built on top of the currently exposed repos, you would have your choice of what you want to do. If you're a leaf, you would have your choice of what you wanted to do. If nothing else, in case kubernetes vendored you back, sure, make yourself a separate repo and delete yourself from staging; that's a thing that could be done. We've done that in OpenShift as well. It worked.
A
Well, I think there's a whole bunch of other stuff. In my mind, there's a question of what granularity of libraries we want to create out of this, because you look at some of the stuff that gets bundled into client-go, and I actually think that should probably be broken out as independent utilities.
A
Also, some of the stuff around, say, the rate-limiting queue really is a more fundamental thing that doesn't necessarily need to be part of client-go, and can be useful not just to kubernetes but to a bunch of other stuff. People could take dependencies on that in a way where they can be sure it's not going to change. So I definitely think there's some stuff that's ready to be finished off and broken out. And, you know, idiomatic Go is to create, not necessarily JavaScript-sized packages, but smaller packages.
B
Sorry if my thoughts are all over the place here, but it's one thing with API machinery: it was successfully moved out in the sense that it could be vendored externally, but the interfaces themselves are not in a place where they're easily consumable. Like, we've created aggregated API servers, right? But I don't think we have a single aggregated API server that was written by an ordinary outside hacker.
H
That's a statement not on the mechanism used to do it, but on what was moved and how stable, or lacking in stability, we knew it was and would be, right? When we looked at something like API machinery, it lives in service to the API. We knew there were holes in it already. We knew there were going to be painful, poorly functioning pieces of how you handle decoding and codecs and copies and conversions, and we knew they were going to be changing.
H
So we didn't put the effort into trying to stabilize it, because we knew we were going to change it. If, instead, you have something where you want to create a different unit, where you know you want your unit, you need to separate it from where it is today, and then you want to make an easy-path API for using it. That is certainly achievable.
H
You have roughly what you want today, at least in some cases. You keep using the resource builder, as an example. I don't know whether that's a good thing or a bad thing overall; Clayton and I fought over it many times, but it exists where it is today. There are other pieces you're going to move with it; as you look at your dependency tree, you are likely to find more, right? You have the resource builder, you have handling kubeconfig files, where you may choose to strip things out.
H
I see that as a pretty core piece of functionality for something like kubectl, and I know there are at least two packages for handling it today. In kubectl, your library is going to be fairly significant, and a move initially to staging does not preclude you eventually moving out, but it gives you time to figure out exactly what it is you want to do with kubectl as a whole.
H
That's before you end up committing yourself to trying to manage vendored dependencies on things that are fairly fast-moving, right? Like, today, a resource builder without client-go might be possible, but I don't know that I would try it; a resource builder without API machinery is intractable.
B
He says that the builder without client-go doesn't make sense, that without API machinery it doesn't make sense. But I think what we want to use the resource builder for, for kinflate, is just to walk the directory structure and parse the config files until I've captured objects, right? So why do we need all this complexity? I don't know.
H
I suspect that you still want something that can manage shortcuts for you, something that can handle category expansion, and a thing that can create field selectors and label selectors on your queries. I think you're still going to want those things. I won't say it's impossible to add them on later, but it's a fairly common use case if you look at commands that try to select objects.
G
Can I back up here? Part of the problem, and I know this has been part of the problem working on some of the other projects, is that things like client-go require you to use api and apimachinery; they end up being coupled because of the way things leak. The APIs are constantly changing, and so the packages we have are moving fast and breaking things, which makes it really hard for anything that uses them.
B
That hasn't been my biggest concern. I mean, it is a problem, and I agree with it, but I think my biggest concern is the entry-level complexity to use any of our libraries. If you look at setting up the resource builder, it's like six things you need to set even if you just want to walk a file directory, as far as I can tell, and maybe I don't even understand it correctly.
B
Yeah, I agree with you, David. I've gone through it with Jordan, him and I trying to review a PR, and neither one of us could make sense of the documentation that does exist. I think the problem there is just that it was written well for the time it was written in, for the folks who were growing the project, right? The people who wrote it at the time wrote it with a much deeper understanding of it all.
H
You've given Clayton too much credit for initially creating it; it was okay when it was initially created. I don't know if I was the initial reviewer on it or not, or just the initial victim. It is not a trivial thing to understand as you try to add pieces to it. You're right in the sense that its structures don't make it easy to extend; progressive discovery just isn't a thing it handles well. One thing it does do well, though:
H
What I do like is that if you construct it from one of the New functions, or whatever it is, you never get to a point where you call a function and it just panics, right, or fails because you didn't set a REST mapper, or you didn't set a category expander, or you didn't set a decoder. As long as you actually go through the constructor pieces, whatever you call down the path will work. It did that.
H
On his doc: Clayton had a comment requesting that we talk about it next year, because he was on vacation. I guess we can't hold everything, but this is actually a pretty significant idea. The idea that we would basically abandon the current kubectl and create a new tool is significant.
A
So what I'm going to say is that if we do go with that plan, I'm going to call an audible and say, hey, this is something that is probably a KEP-level, SIG Architecture, communicated-super-widely, get-a-lot-of-feedback type of thing. I mean, I don't want to create problems for you, but I think there are going to be so many people interested in this that it's going to pay to make sure nobody gets surprised.
B
Well, while not having to start over from scratch: the specific proposal there suggests that we effectively vendor all of kubernetes/kubernetes into another binary, take the Cobra command that is the kubectl Cobra command, and just attach it to a new root. The specifics of the vendoring being: we don't vendor it off the root, we don't do the flattened dependencies; we basically do a git-submodule equivalent, where we just suck down the whole repo, pull out the Cobra command, and then attach it to a new root command.
H
Are you done? Thank you. Okay, so I'll say that I have fairly significant experience doing this. I have tried building multiple libraries that make use of client-go without stripping vendors, right, and it seems on the surface like it should be a very easy thing to do. Right? Go will allow you to say, this is my interface, and I can duck-type against interfaces and make use of it. From a practical perspective, that has actually worked zero times for me. A sum total of zero times.
H
What happens? The first thing is that Go doesn't recognize a second-order interface relationship, and we use those everywhere. If you have an interface that has a function that returns a different interface, then when you try to create a compatible type, it's not recognized as compatible; you cannot pass it. In our code, the canonical example is Codec: Codec has a function that returns a Decoder. Well, you can't do that with your own type, because it no longer conforms to Codec, because the decoder return type differs.
H
And that's just starting, because the next thing that happens to you, once you resolve those and start handling that, is that the libraries you depend upon will be hitting globals that exist inside of the Go core libraries. The simplest example is glog. Everyone uses glog; you accidentally vendor it twice in two different stripped and/or non-stripped vendor trees, and your process panics as soon as you start it, because you have two copies of glog both hitting the same flag, and it double-registers.
H
There are other library examples that do the same thing with, like, a default global HTTP server, which I didn't even know was a thing, but it'll panic when two copies of the same library that you vendored twice both try to set a handler, and it'll drive you bonkers. So the straight git-submodule approach: you can't do that, because you can't actually make it build. It's not good. Like, a builder doesn't work. What do you mean, you did build it?
H
The git submodule includes the entire vendor tree, right? That's the point of the submodule: it is the entire tree built in, and now you have something that you can't remove, because it's part of a git submodule that you want to keep in sync, and so you're stuck. It's very problematic.
H
If you have both things that use client-go and some sort of core functionality that you provide, say the kubectl parsing, some sort of infrastructure you provide, and you have to agree on levels, you're never able to vendor again. What happens is one piece will move up and your other pieces won't; the core infrastructure that you have built will require a level that doesn't match, so the utility that you're trying to pass through doesn't function.
H
You could try to dodge it by saying that you are only going to share a single dependency, and that dependency is going to be Cobra or pflag; I think you mentioned both. So say you depend on Cobra and you try to resolve it that way, so you have one shared level of Cobra. I'll point out that not all levels of Cobra behave the same; in fact, we're bumping levels of Cobra because we want new Cobra behavior.
H
But let's assume that you manage to get everything to compile. Because of the different behaviors in Cobra over time, you're not guaranteed that the things you vendored still work. And if you tried to pass anything through, you would be trying to move everyone to the same level of client-go at the same time.
H
Right, because if you try to vendor something that itself vendors, you get two choices: you either strip its vendor tree or you don't. There's no strip-out-only-the-pieces-that-conflict option. You would be trying to build your own vendoring tool on top to manage it, and don't get me wrong, I would love to have that vendoring tool, because I need it in OpenShift, but we haven't found the time for it.
B
How many are there? I know about glog; I did a proof of concept, and I didn't exhaustively test all the options, but I tried vendoring kubernetes into another command, and then I just removed glog, and it seemed to work. I guess Cobra too. How many are we talking about? Were there, like, three things you encountered that were problematic with the flag registering?
H
It's tempting to say, let me get a bunch of stuff for free from the existing kubectl, but do you actually need those things for free, or would we not be better off starting from the perspective of: I have a new tool, this is what I want to build, and building it up with each piece from scratch?
H
If people want the old kubectl behavior, they can use it: once you build a plugin infrastructure, you can make kubectl compliant with that plugin infrastructure and just dynamically include it. That seems like a very achievable thing that completely divorces you from the need to pull in a huge vendor tree and keep yourself up to date, and staying up to date is brutal. Again, we have experience downstream in OpenShift, and it sounds like Matt also has had negative experiences.
G
I'll add here that the dependency management issue, trying to deal with things that vendor things that don't work, whether you strip downstream vendors or not, is a really, really hard problem. We've actually looked into this and discussed it; I've discussed this at length and whiteboarded for quite a while with Sam Boyer, who's done dep, on how we deal with this. It's a very hard problem.
B
Okay. If we have a nice plugin installation infrastructure and auto-update, and the right infrastructure to make the kubectl binary installable, then maybe when you start the tool it just prompts: do you want to download the kubectl plugin? Maybe that solves the problem. And you think that would be simpler, I think.
B
Yep, that background is helpful. It sounds like, generally, there are no vocal opponents of the notion that we should be able to start building new infrastructure in a new place, releasing quickly, and at the same time be able to pull in the functionality of kubectl, so that people don't have to manage a bunch of binaries simultaneously.
H
I was intentionally trying to avoid talking about those particular merits, because I think that's a great discussion for next year, for a wider set of people, when more people have seen this. When I came in, there were comments from two people, and this is worth more than that; I'm sure it just needs to be more widely distributed, and that's the discussion we want to have. I think Joe mentioned SIG Architecture; that might be a good place to talk about it at some point as well. It's a big choice.
H
We now have an example, in API machinery, of how to actually manage a generic scale client that can speak to multiple API versions on the other side. So something like a generic log client is certainly a possibility to follow: a generic log client, or a generic rollout client, or an exec client. All those things we now have patterns for, and the second ones always go faster. So I look at it, and while you might say it doesn't look like much progress has been made...
B
You know, I've definitely seen that progress, and I'm happy to see it. There are things like the conversion thing, which I don't think we have a good solution for moving out. I worry about the level of test coverage we have on our tools; a lot of stuff has been developed on a three-month release cycle, with the assumption that people are going to find broken stuff as they're playing around with it during development.
H
Other things I'd point to in the last six months are patterns for flags-to-options, and how you can actually have a standard, run them through conversion, and still have the flexibility you need. We figured that out in the last six months in the API server, right, and in the last three months we proved it.
G
Is it better to try to move them out and make them more consumable separately, rather than trying to refactor to make them more easily consumable downstream? Because people are already starting to write their own functionality to replicate the parts they need, because we're taking a long time. And now that we've got a much more stable foundation for kubernetes, as folks have been saying, people are going to build and operate apps on top of kubernetes, and the set of people who need to interact with the API is much bigger than those of us
G
who've normally been around. They're going to go create a lot of their own tools, because a lot of them just do that naturally, and because there are gaps there. They need these common tools, and the way we consume them today isn't very easy. What time frame can we get to to make them easier, and is there another approach to getting there more quickly that also benefits the CLI, I think?
C
Yeah, I just wanted to mention that, for the beginning of next year, let's not forget to have this broader conversation about setting some bullet points as our major goals for the year for this SIG specifically. We did that at the beginning of last year, when we were many fewer people in the group, so I think that's a great discussion to have in a broader manner at the beginning of next year, right after the holidays. I was also about to propose that we should send the proposal out to kubernetes-dev, because that seems like something that every single kube developer and many users would actually be affected by, in one way or the other. So that's something that should be broadly discussed, not just at the SIG Architecture level, but with the entire community.
C
Right, thanks Phillip, and thanks everyone. Have a great year and great holidays, all of you guys. You know, I can still be reached on my personal email, so if you really need anything, feel free to ping me. I'm not sure I'll be able to check my email every day with the coming adventure, but anyway, for sure, just ping me if you really need anything. It was great working with you guys; really, have some great holidays and a great year, all.