From YouTube: [sig-arch] code organization sub project 2019/04/18
A: Alright, hi everyone. This is April 18th, and this is the code organization sub-project of SIG Architecture. This is the kickoff meeting, so welcome everyone, and please be kind to each other. Once we get the ball rolling: I have some work going on in the house, so you might hear some roofers hammering and things like that. If that happens, Andrew will take over and, you know, continue the meeting, so hopefully that'll work for everybody. Does everyone have the link to the doc that we'll be using today?
A: I will just paste one in the chat room. Andrew has added some stuff to the agenda, I think, so maybe we can just start there and then go over the rest of the things in the document, unless other people have things that they want to bring up; just add those to the open discussion and we can continue there. I think most of us know each other, but there are a few new people, and I would like them to, you know, say a few things about themselves or what they are interested in.
I'm Jaice Singer DuMars. I'm a co-chair of SIG Architecture, I help with the sub-projects, and I do other things around the project. I'm basically here just to kind of help out. Thank you.
L: I'm Clayton Coleman and I work for Red Hat. Like Tim, I am also responsible for some of this mess, although I'm the person who is the most annoyed by the mess, so I am here to offer opinions and assistance.
Clayton, when can we start on v2 of kube?
It is tomorrow, we're gonna tear it all down. Right. Sign me up, baby.
H: That's right, okay, we already wrote it in bash once, right. Right, okay!
A: So let me quickly look at what has been added so far. Let's go to Andrew. Andrew, can you please give us an update on the cloud provider stuff, where it is, and how we can drop like 20 to 40 megabytes of code from the hyperkube image? Yeah.
F: Sounds good. So yeah, as many of you know, we have an effort to get all clusters onto out-of-tree cloud providers, versus the in-tree providers right now. There are three main issues that we need to tackle. The first one is getting rid of all the vendored dependencies on cloud SDKs that we have in k/k. This is tricky because it's not just the main cloud provider integrations that import those SDKs; it's also all the in-tree volume plugins, the end-to-end testing framework, and the credential provider in the kubelet. So in order to remove these dependencies we need to first migrate all these clusters to be using the cloud controller manager, CSI, and the out-of-tree credential provider, which we're working on this release. The second problem is binary sizes, which is related to the first, but we can sort of tackle parts of it in parallel.
F: So this is just kind of seeing which dependencies, which imports, we can remove without necessarily having to remove them from vendor. And the third problem is that the existing cloud provider implementations have to import the main tree, which is problematic because we didn't really set up k/k to be imported externally in that way. So a lot of the cloud controller managers are in dependency hell, because they're importing, you know, the controller manager command in kubernetes/kubernetes, which imports pkg/controller and pretty much everything else that comes with it.
F: And so we probably need to come up with a plan to make the cloud controller manager a little bit more friendly for external consumers. We've had this discussion in SIG Cloud Provider a few times, and the main problem there is that the cloud controller manager imports all the cloud-specific controllers, and we don't really know what a good home for those is, because they're still imported in the kube-controller-manager.
F
So
should
those
be
in
staging
repos,
or
should
they
be
in
core
that
we
need
to
have
a
discussion
about
so
those
are
the
three
big
things
with
respect
to
cloud
providers
and
so
I
know.
Sig
testing
is
also
doing
a
lot
of
work
and
testing
comments
to
also
like
rip
out
or
like
refactor,
the
the
either
be
testing
framework
so
that
we
can
also
pull
the
providers
out
of
that.
So
that's
kind
of
like
a
high-level
review
of
non
there.
So.
F: I think we have buy-in from most people with respect to that. The thing is, we're finding it hard to push users to use the cloud controller manager, and not just users but developers too. The main problematic thing has been that developers don't want to maintain two git trees, the in-tree cloud provider and the out-of-tree one, and so by pushing the legacy in-tree implementations to a staging repo we're kind of forcing developers.
F: It's like we're saying: you have no other choice now, you have to build the out-of-tree one. You can vendor in whatever you had in-tree and have the exact same implementation. And so that should be done in 1.15; we should have most of the cloud providers moved to that staging repo in the next few weeks. Right.
F: Yeah, so I would say the earliest cloud providers are the hardest, because, you know, Google for example has the SSH tunnel, which adds a lot more complexity to the migration. But this just makes sense, right, because the earlier adopters had more features tacked on at the beginning. So I want to say the laggards are Google and AWS and Azure, just because they have the largest user bases and it's harder to gain adoption, especially in those larger production environments.
F: And so this is the motivation behind the staging repo as well. If we can get buy-in from enough people and set a firm date, as in, this is when we're deleting this repo, then I think that would be a great forcing function for all the different vendors and providers to move forward on this effort. All right.
G: I looked at it briefly, and it looks like some of the credential provider implementations don't actually pull in a lot of big SDK dependencies. Some of them are really lightweight and file-based, or speak directly to metadata endpoints; some of them pull in full SDKs. So yeah, like you said, we just noticed that last week, so I guess the next step is identifying which ones pull in full SDKs and seeing what the alternatives are. I know the AWS one just got refactored, which is nice, but it's also making use of SDK functionality.
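A rough sketch of that identification step (an illustration, not the tooling referenced in the meeting): dump a package's transitive import closure with `go list -deps` and scan it for known SDK module prefixes. The credential provider path in the usage note and the SDK prefixes in the code are assumptions.

```go
// sdkcheck reads an import closure (one import path per line, e.g. the output
// of `go list -deps <pkg>`) from stdin and reports which well-known cloud SDK
// prefixes appear in it, and how many packages fall under each prefix.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Illustrative SDK module prefixes; adjust to whatever is actually vendored.
	sdkPrefixes := []string{
		"github.com/aws/aws-sdk-go",
		"github.com/Azure/azure-sdk-for-go",
		"google.golang.org/api",
	}
	found := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		pkg := strings.TrimSpace(sc.Text())
		for _, prefix := range sdkPrefixes {
			if strings.HasPrefix(pkg, prefix) {
				found[prefix]++
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}

	if len(found) == 0 {
		fmt.Println("no known SDK packages in this closure")
		return
	}
	for prefix, n := range found {
		fmt.Printf("%-40s %d packages\n", prefix, n)
	}
}
```

Hypothetical usage, from the kubernetes/kubernetes root: `go list -deps ./pkg/credentialprovider/... | go run sdkcheck.go`.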
M: One other note to add here, which is a good thing for us to think about as well when we're talking about these organizationally: there is a subtle cost that comes with a lot of co-located code, which is shared dependencies. I was reminded of that last night; gRPC, for instance, is like that.
G: In the cAdvisor case we were able to do that, but yeah, there's a lot of hand-wringing that goes on any time a dependency gets bumped: oh, this is bringing in like 25 new things and a hundred thousand lines of code, if only there was something we could do. Well, sometimes there is, like asking the question: why are these here, and are they required? And if we took a change upstream to make it opt-in, could we improve the situation? So it's a good question to ask.
A
The
other
thing
that
got
nuked
was
the
CMD
CF
SSL
package
and
that
prune
quite
a
bit
of
dependencies
as
well.
Then
the
other
one
that
changed
that
went
in
last
night
was.
We
drop
the
local
and
CentOS
directories
from
under
cluster,
so
that
that
didn't
change
the
binary
size.
It's
tough,
but
you
know
that
was
that
has
been
pending
for
a
long
time,
a
duplication
in
deprecated
more
than
a
year
ago.
So
we
removed
that
there
was
one
more
problem
that
came
up
in
signal.
I,
think
Tim,
all
clear.
G: So I did a similar thing to what we did with the runtimes, to make the cloud provider information populated in cAdvisor opt-in, and I think this didn't actually end up dropping any lines of code, because the cloud provider functionality is already pulled in for that. But this would be sort of a prerequisite to getting rid of it.
G: The decision is that we need to follow the deprecation policy, even though we don't really think anyone is using those endpoints, but we don't have any way to be sure. So we're gonna follow a deprecation timeline on that, and I'm actually gonna try to rip out all of the cAdvisor endpoints that are exposed by the kubelet. I guess that would happen in 1.17.
K: I had a quick question. I'll do my normal thing, which is: do we have an analysis of where the problems are here? I mean, it's great that we're making progress on these various fronts, but I still don't have a big-picture view of: here's the size of the problem, these are the bits that we've fixed, and these are the bits we still have to fix. Do we have anything like that yet?
A: Yeah. So Quinton, in the agenda doc, if you scroll down to the next page, there are like two or three more pages with the things that we have already collected. We need to triage it, prune it, and convert it into milestones and project boards and things like that, but we are collecting information. So if you know of additional stuff, you can just add it to the doc at the bottom.
G: Similarly, something we have done recently is slicing and dicing our dependency tree a few different ways. One is looking at all of the vendored packages that get built and asking which are the biggest 20, the top 20 that contribute to code size. Another is looking at shared dependencies, which is one of the items later: what are the dependencies that we get a lot of conflicts on, and what is causing those conflicts? So we're trying to slice and dice that a few different ways.
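A sketch of the first kind of slicing mentioned above: counting Go source lines per top-level vendored module and printing the 20 largest. The actual script used is not part of the transcript, so treat the layout assumptions here (running from the repo root, host/org/repo as the module key) as illustrative.

```go
// vendorsize walks a vendor/ directory, counts lines of .go source per
// top-level vendored module (e.g. github.com/aws/aws-sdk-go), and prints
// the 20 largest contributors to vendored code size.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
)

func main() {
	root := "vendor" // run from the repository root
	counts := map[string]int{}

	filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
			return nil
		}
		rel, _ := filepath.Rel(root, path)
		parts := strings.Split(filepath.ToSlash(rel), "/")
		if len(parts) > 3 {
			parts = parts[:3] // treat host/org/repo as the module key
		}
		key := strings.Join(parts, "/")

		f, err := os.Open(path)
		if err != nil {
			return nil
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // generated files have very long lines
		for sc.Scan() {
			counts[key]++
		}
		return nil
	})

	type entry struct {
		module string
		lines  int
	}
	var all []entry
	for k, v := range counts {
		all = append(all, entry{k, v})
	}
	sort.Slice(all, func(i, j int) bool { return all[i].lines > all[j].lines })
	for i, e := range all {
		if i >= 20 {
			break
		}
		fmt.Printf("%8d  %s\n", e.lines, e.module)
	}
}
```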
H: That was actually what I was gonna ask Jordan: do you have tools for this? Like, do we have something that will draw a DAG of all of our dependencies, so we can see the number of incoming links and the relative size in terms of lines of code? I'm thinking something like pprof would make a really good visualization of this. Then we could put a bounty on cutting off the biggest things that have the least number of inbound links, whether that's by rewriting those libraries ourselves, or just breaking dependencies, or whatever. Right, yeah.
H: Do we have a comprehension of the relative risk of these dependencies? Like, philosophically, do we think big dependencies are riskier, or less-maintained dependencies are riskier? Which of these things, if we were to put a bounty on this, like suppose we literally put money on this, what are the things we would advise people to go after?
O: Well, small stuff is easier to review. Well, no, but my point is: if you look at the npm attack, it was, you know, some random library that nobody is maintaining anymore, and someone else takes it over. I think a cloud provider SDK is maybe giant, but it's a lot less likely that it's gonna get compromised like that.
M: I would go even further, David. I would guess that the vast majority of vendor bumps are done without any review, or even any understanding of the changes beyond the first order. So effectively it is pull-and-pray, and all we're relying on is static compilation, and e2e tests, somehow. So we probably need to be a little bit more deliberate about whatever the process is.
G: If I had to pick a metric to rank these by, it would be the number of outgoing dependencies. Like I was gonna add: if something is big but has few transitive dependencies, then worst case, even if it's unmaintained, we can fork it and fix a security bug. But the more ripples it has, the harder it makes it to find a level that we can live with, I think.
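A minimal sketch of ranking by that metric, assuming the module graph as input: tally how many distinct modules each module requires in the output of `go mod graph`. This illustrates the idea rather than reproducing the analysis mentioned in the meeting.

```go
// modfanout reads `go mod graph` output from stdin (lines of "parent child")
// and prints modules ranked by how many distinct modules they require,
// i.e. their outgoing dependency fan-out.
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	requires := map[string]map[string]bool{} // module -> set of required modules

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		parent, child := fields[0], fields[1]
		if requires[parent] == nil {
			requires[parent] = map[string]bool{}
		}
		requires[parent][child] = true
	}

	type entry struct {
		module string
		fanout int
	}
	var ranked []entry
	for m, children := range requires {
		ranked = append(ranked, entry{m, len(children)})
	}
	sort.Slice(ranked, func(i, j int) bool { return ranked[i].fanout > ranked[j].fanout })
	for _, e := range ranked {
		fmt.Printf("%4d  %s\n", e.fanout, e.module)
	}
}
```

Usage would be something like `go mod graph | go run modfanout.go | head -20` in a module-aware checkout.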
M: There was an attempt at that. Like, we care about it because it helps OpenShift and Kubernetes, and so it might be useful, if there are others out there who also have data-driven approaches to this and are willing to share them. We'd certainly be happy, I'd be happy, to have that team come and, you know, offer the reports, because they're basically just gonna turn around and offer the same reports to us.
M: Exactly, and for us it's not like a private or secured thing; this isn't an internal competitive advantage sort of thing. So again, I know that's a sensitive topic for some folks, but we'd be happy to, you know, participate and offer some resources there, and I can push them to do that.
A: Do you still have the tools for generating that information, Tim?
G: I think it was a bash script, but I can dig it up.
A: Okay, thank you, that'll be helpful. And I remember Alec was doing some mod graph analysis; at least he came up with, like, one full-blown app. You know, really really big, complicated, twisty spaghetti stuff.
G: It'd be great to have the methods that we used for these captured, so that we could really easily run all of these things: what are the top 20 built packages by size, what are the packages with the most incoming links, or the packages with the most outgoing links. We could capture that on demand or regularly, so that we could actually see this over time, or even use it to evaluate, if we're trying to decide: is this new thing we're wanting to pull in or update helping us or hurting us?
G
We
could
even
we
have
guidelines
for
review
dependency
updates
like
make
sure
the
license
is
good,
make
sure
the
thing
is
maintained,
ish
I,
don't
know
how
we
determine
that,
but,
and
so
having
something
like
this
like
run
this
on
master
and
run
this
on
the
perspective,
pull
request
and
like
is
this:
taking
us
in
a
good
direction,
or
that
direction
would
be
helpful.
Okay,.
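One possible shape for that check, sketched here as an assumption rather than existing tooling: dump `go mod graph` on master and on the pull request branch, then diff the module sets so a reviewer can see exactly what a bump drags in or removes.

```go
// moddiff compares two `go mod graph` dumps (e.g. master.graph and pr.graph)
// and prints which modules the second dump adds (+) or removes (-).
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

// modules returns the set of module@version strings appearing in a graph dump.
func modules(path string) (map[string]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	set := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		for _, field := range strings.Fields(sc.Text()) {
			set[field] = true
		}
	}
	return set, sc.Err()
}

// onlyIn returns the members of b that are not in a, sorted.
func onlyIn(a, b map[string]bool) []string {
	var out []string
	for m := range b {
		if !a[m] {
			out = append(out, m)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: moddiff <master.graph> <pr.graph>")
		os.Exit(1)
	}
	base, err := modules(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pr, err := modules(os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, m := range onlyIn(base, pr) {
		fmt.Println("+", m)
	}
	for _, m := range onlyIn(pr, base) {
		fmt.Println("-", m)
	}
}
```

The two input files would be produced by running `go mod graph > master.graph` on master and `go mod graph > pr.graph` on the pull request branch.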
H: One thing that I can add: we can sort of play the 800-pound gorilla a little bit, and if we propose interfaces and decouplings in upstream projects, I think we carry a little bit of weight in making those sorts of proposals. So we should consider that to be on the table as we look at these things too. Right.
A
So
specific
feedback
there
Tim
is
the
the
Google
alpha.
Zero
API
was
like
20
Meg,
that
that
was
the
biggest
one
that
we
have,
and
so
we
raised
the
question
that
do
we
actually
need
the
alpha
one
it
to
be
present
in
the
KK
or
whether
all
the
things
that
were
in
alpha
one
have
already
gone
to
beta
or
not,
and
we
don't
know
the
answer
to
that
question
yet
then
the
follow-up
was
like.
H: Yeah, absolutely. Sorry, I just wanted to throw in that last point because I forgot to mention it earlier: as we're looking at these things upstream, we're not always beholden to trying to change or just drop dependencies; we can actually try to get interfaces. Think logging, right, the conversation that's going on around that. You know, concretely, I think we have every single semver library that exists vendored in. I'm not even kidding, I think we have four separate semver libraries. That seems ripe for an API.
G: That's what the code is like, so it's 10,000 lines of one file versus 10,000 lines of a completely different file that's unreadable, and it's generated proto serialization, so it's basically unreviewable, and it's kind of down in the guts of all of these libraries. And there's actually a pull request right now to bump one of the cloud provider libraries, and it wants a newer version of proto, so we're trying to figure out: is this gonna be safe for etcd and safe for our libraries?
M: Yeah, and actually I'll take some responsibility, or blame as you will, for this one, just because I got really annoyed while working with the conformance tests. Many of our e2e suites take intensely deep dependencies on the kubernetes/kubernetes codebase just to get, like, two constants. And some of them use utility classes, which means the conformance tests could break fairly easily if someone refactoring k/k changes a constant; constants totally can change, that's what a constant is, right? So there was a general code cleanliness issue around that. There's also the point that, for the conformance tests, because the e2e tests are intended to run outside the cluster, there really is no reason for them to rely on kubernetes/kubernetes. And so one of the topics that I wanted the working group to at least discuss and get general input on, mostly as a cleanup, is: if we decide that moving things to staging makes sense,
M: the conformance tests and the e2e tests are a good candidate for that, purely from trimming dependencies, keeping them isolated from the codebase to some degree, and making it easier for others to go grab and depend on the conformance tests. I don't know how many vendors have completely reimplemented the conformance suite in another form, like the various distros that want to add their own e2e tests, whether they do that in their own form.
M: We on the OpenShift side just reuse the code, because it's easy, and so we reuse some of the core libraries. That's more in the testing commons effort, but it at least gets at the idea: our client libraries are reasonably well-isolated components; should our tests that have the higher bar be reasonably well isolated as well? So.
A
The
good
news
here
on
this
one
clayton
is
timothy
is
running
a
group
that
is
looking
into
this
and
they
are
trying
to
remove
the
framework
directory
into
a
separate
staging
reporters
staging
area
and
to
that
effort.
The
first
piece
of
thing
that
they
were
trying
to
do
is
like
what
are
the
utility
methods
that
are
there
in
framework
directory
and
how
do
we?
Where
do
they
actually
belong?
Do
they
belong
in
the
framework
or
do
they
need
to
be
moved
elsewhere?
A: Did you want to pick something up, David?
J: So I've seen some of the results of that, and I do think we need some guidance when people start breaking out flags. You know, we have guidance on flags that says don't register unconditionally, don't auto-register these flags; let me choose my flag sets, and let's follow the best practices we've developed over time for having separation between flags and configuration. And I have seen it, I have seen it because it has bit me, and it has bit me because it unconditionally registers flags.
M: And, I mean, if there's nobody else here: I think the meta question that came up was the discussion that we will, you know, use staging and then just be strict about dependencies, which is a fairly easy move. But then it got into the "well, we haven't really documented what goes into staging," and then there's another one, which is, you know, what things should go into staging and what things shouldn't; we've certainly evolved on that.
M: I think getting this working group to make some suggestions at a top level, like "we believe in staging for this class of things," and getting that in writing, would make it easier for folks doing random cleanups to kind of have an anchor and know what to suggest and where to suggest it. We certainly have common libraries, but we also just have cleanliness, and this was in that weird cleanliness-and-common-utilities case.
F
Yeah
and
I've
been
pretty
involved
in
the
testing
comments.
Effort
too,
and
it's
it's
gaining
a
lot
of
traction
and
we're
the
biggest
motivation
behind
the
testing
comments.
Effort
we're
using
is
is
related
to
the
cloud,
the
cloud
provider
efforts
because
there
is
the
separate
cloud
provider
framework
in
there.
So
the
approach
that
Sinclair
is
suggesting
is
we
first
go
through
all
the
utility
files
and
remove
methods
that
we
don't
even
need
in
the
first
place
and
then
do
a
second
pass
and
try
to
remove
some
of
those
dependencies.
B: We're discussing with SIG Node moving the streaming library out of kubernetes/kubernetes, because the CRI API was recently moved out and then we found out that this is also a huge dependency that should be moved out. But we were not finding the right home to move it to, so we had to open up a KEP, per the SIG's suggestion, to find a right place for it. And then Tim's suggestion is that maybe we should rethink whether we even need this streaming library now or not.
G: We've now moved to a model by default where the CRI streaming proxy is only over localhost, and that all gets proxied through the kubelet. This was for security reasons, because it turned out to be pretty hard to get that redirection from the API server right, and it also simplifies the authentication piece of it a lot, since we can just piggyback on the kubelet for that. And so as long as the kubelet is proxying that, I'd prefer for those streaming calls to just go directly through the CRI.
A: The idea is to try to figure out how we can do this in bite-sized pieces that we can get people to help with, since, when we are talking big-picture stuff, it's very hard to bring in new people, unless we say, like, "hey, go update such-and-such dependency," and then someone goes around doing that, faces the problems, and then comes back to us and says this works, this doesn't work. So that is what I would like to try to do here.
A: Absolutely. So as part of that, I also want to include things like developing on feature branches and what that takes as well. So I do want to start suggesting topics like that and invite Daniel and, you know, folks who are working on server-side apply, people like that. And so this was, like, Jordan and I have been dealing with culling these dependencies for a few weeks now.
A
So
if
that
was
natural
to
start
from
here,
plus
being
the
existing
base
with
the
cloud
provider
stuff
which
has
been
going
for
a
year
years
now
sigh.
So
that's
where
we
started,
but
yes,
I
do
want
to
cover
all
the
things
that
we
talked
about
in
the
doc
over
a
period
of
time
and
I
would
like
to
eat.
I
wanted
to
set
up
the
cadence
first
and
then
invite
people
around
specific
topics.
So
we
can
invite
the
testing
Commons
people
to
give
an
update,
and
you
know
people
who
are
working.
A
Also,
the
storage
right
storage
people
have
been
refactoring
stuff
for
a
while
now
and
I
want
to
hear
from
them
on
what
worked
for
them.
What
didn't
work
for
them?
They
have
an
explosion
of
you,
know
github
proposed
now,
and
they
are
in
the
process
of
culling
some
of
them
now.
So
there
is
stuff
that
we
need
to
learn
from
them
and
use
in
the
rest
of
the
six
as
well.
Does
that
helped?
M: I do think we need to have some heuristics, as part of the process for what we're doing, that we're actually making things better for a specific set of people, and that we're not just saying, "well, this makes our lives better, we're gonna split up everybody in the community," whether it's people who inherit from the client tools or people who have to test Kubernetes. Just make sure that we've got a set of people that we're trying to satisfy, and that we consider how each of the changes impacts them.
M: And I think we just want to have, and this is an ask for all working groups, a clearly defined purpose, and goals. We all want to go make things better, but we should at least have something that we can fall back on, which is: okay, this is meant to make everybody's life easier; and if someone tells us that it's not, how do we have that discussion, and so forth?
A
You
know,
but
we've
been
like
carrying
along
the
docker
shim,
but
it's
been
dragging
us
back
since
a
lot
of
people
are
not
using
docker
in
their
environments
right.
So
that
is
an
example
of
something
that
we
are
continuing
to
carry
the
debt
and
it
would
make
things
simpler
for
signal,
but
the
topic
wasn't
raised
and
the
you
know
we
are
not
actively
working
on
that
topic.
So
I
think
we
need
to
bring
things
surface
things
like
that
into
caps
and
find
owners
for
for
that.
A
But
then
we
also
had
missteps
where,
for
example,
the
CRD
stuff
we
are
still
trying
to
figure
out.
How
do
we
install
the
state?
Cid
right?
I,
don't
think
that's
a
solved
problem.
Yet
so,
and
then
we
told
the
CSI
folks
to
move
things
to
see
our
DS.
But
then
you
know
now
some
of
their
stuff
went
back
into
KK
and
the
runtime
class
is
also
some
of
it
is
going
to
go
back
into
KK.
So
I
want
to
avoid
that
where
we
are
giving
conflicting
information.
G: The thing that I want to make sure we keep in mind is that sometimes it's not a trade-off between something that's easy for us or easy for someone else. Sometimes it's: this makes it possible for us to maintain kubernetes/kubernetes, and if you have, like, six different things that all require different levels of something, sometimes it is literally impossible to maintain that in kubernetes/kubernetes. So does splitting something out make life more painful for someone? Possibly, but there's not always an alternative. So I agree that we don't want to do it blindly.
G
We
don't
want
to
do
it
blindly,
it's
fun,
but
so
we
should
just
have
good
reasons
for
the
things
that
we're
splitting
out
like
we
were
splitting
this
out,
because
we
can't
maintain
the
tree
the
way
it
is,
and
I
will
say
that
it
is
much
easier
to
catch
these
things
when
they're
coming
in
then
when
they
are
in
and
are
causing
problems,
and
so
I
think
something
we
that
would
help
us.
A
lot
is
getting
good
information
for
people
reviewing
things
getting
introduced,
so
we
can
say
not
just
okay.