From YouTube: Kubernetes SIG Architecture 20190328
A: Welcome to Kubernetes SIG Architecture for Thursday, March 28, 2019. We will start with the agenda. You can get to the agenda by going to bit.ly/sig-architecture; if you're listening or following along, that's the easy way to get to it. I should go ahead and drop the agenda into the comments here.
A: We've got a whole bunch of things. As you remember, one of the things that we're doing differently now from SIG Architecture meetings in the past is that we're going to report out on a bunch of the work that's been going on and maybe have some discussions on it. A lot of the work now isn't debating topics here; it's things that have been happening in other meetings, in other places, and we're going to report and talk a little bit about that now.
A: Today you're going to see some things about conformance and code organization, which are two of the subprojects that we have, and there's been a fair amount of work happening with those. So, for the first one, there has been the conformance subproject work. Do we have somebody on who can talk about this? I think Brian put this on here, but I'm not sure if he's here. Or Tim, do you want to?
C: So this call this week was pretty much just an overview of what we're working on, as well as walking through an example of a review. There was a lot of bikeshedding around what it means for some of the issues related to Windows support, and we had a couple of action items. The key action item was that we needed better documentation about what is and is not supported. We currently have a punch list, and we need to amend that punch list; Patrick was going to update it. So that's the primary takeaway. I think as we progress further, next week we'll probably be doing more execution-oriented work: grooming and talking about the items that are actually on the backlog. I think this week was more about getting to know everybody and building understanding.
A: Yeah, and I want to thank Tim; he's got everything organized. We've got a calendar invitation out, it's on the community calendar, and stuff's recorded. I'm not sure if you've uploaded it yet, Tim, but you're working on getting everything uploaded. So we're going to have these tracked with videos and all that. If you want to follow along, it's easy to get involved in conformance: join the meetings, get the recordings if you miss one, and get involved. Thanks, Tim, for setting all that up.
B: No, I just wanted to let people know that we did have the conformance subproject kickoff meeting; thanks a lot to Tim. There was also the CNCF conformance working group meeting, and I think we're going to re-evaluate whether those meetings are still needed now that the subproject is actually moving. It's a little bit difficult to have different sets of people in different meetings.
B: Yeah, I don't know that we even need one per month for the overarching issues; that's definitely something that's worth talking about, but yeah, thanks for offering. A reduced cadence for the CNCF meeting, like quarterly or something, might make sense, so we can decide when we're ready for new initiatives or something like that. But at the moment I would like to focus on onboarding the additional conformance test reviewers, so we can make more progress on improving the conformance test coverage of the core pieces that are already part of the conformance scope.
A
Alright
I
think
that's
it
with
conformance,
and
now
we
can
move
into
code
organization.
What
you're
gonna
see
there's
a
bit
of
in
the
agenda
today.
The
first
thing
we
have
up
is
actually
the
code
organization,
sub-project,
dims
kind
of
put
together
an
overview
document.
That's
linked
in
the
agenda.
Dims
I'm,
not
sure,
can
you
walk
us
through
this
quickly?
Yes,
how
much
time
do
I
have
I
probably
can
have
like
five
minutes.
Is
that
enough?
Okay,.
E
You
know
in
addition
to
the
main
KK
repository,
so
that
is
what
has
been
happening
so
far,
so
people
are
coordinating
among
themselves
and
not
really
organized
around
the
cig
architecture
itself.
So
there
is
work
happening,
but
not
directed
work.
So
what
we
are
trying
to
do
here
is
I
think
Clayton
put
together
this
mission
statement,
so
we
want
to
rely
on
automation
or
process
to
help
developers.
E
So
is
there
anything
that
is
missing
from
here?
That's
the
main
question
that
I
have
for
anybody
on
this
call.
Are
there
things
that
you
would
like
to
us
to
see?
Please
go
add
it
to
the
end
goal
or
chime
in
right
now.
So
how
do
we
do
it?
So
we
right
now
the
main
things
that
we
do
is
like
one
as
new
repositories
under
Cuba
notice.
E
Also,
another
ongoing
issue
that
we
have
is:
how
do
we
take
care
of
dependencies
and
create
add
new
dependencies
and
delete
existing
dependencies?
So
Jordan
has
a
kept
for
that
which
got
approved
as
of
this
morning
and
hopefully
for
1:15.
We
will
be
switching
over
to
go
about
yours
from
collab,
so
yeah
thanks
Jordan,
then
the
other
thing
that
we've
been
using
is
the
feature
branches.
We
have
to
see
what
we've
learned
from
it
so
far
and
how
we
can
take
that
forward
and
get
more
people
to
using
feature
branches.
E
If
that's
the
way
we
want
to
go,
there
is
the
next
thing
is.
The
publishing
bot
is
becoming
very
critical
to
the
staging
process,
but
there's
only
three
of
us
working
on
this
Stefan
and
Nikita
and
nobodies
I'm,
helping
with
that
as
well.
So
this
includes
trying
to
figure
out.
You
know
the
code
in
the
publishing
bot
repository
itself
as
well.
As
you
know
how
we
are
pushing
things
every
time
we
create
a
new
repository
under
staging,
we
have
some
trouble
getting
the
publishing
bot
back
up
after
you
know
creating
that
repository.
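[For context: the staging mechanism means code developed under k8s.io/kubernetes/staging/src/ is mirrored out to standalone repos by the publishing bot, and external consumers import the published copies. A minimal, illustrative consumer, assuming a go modules or vendored setup that provides the published k8s.io/apimachinery:]

    // Illustrative only: this consumes apimachinery by its published
    // import path. In-tree, the same code lives under k/k's staging
    // directory; the publishing bot keeps the standalone repo in sync.
    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        om := metav1.ObjectMeta{Name: "example", Namespace: "default"}
        fmt.Println(om.Name, om.Namespace)
    }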
E: So we are trying to add more validation in the scripts and things like that. We need help here; we need more people to help with the publishing part as well. There's a bunch of existing issues (I'm not going to click on all of them), and there's a bunch of problems that we've documented as well. A simple example here: the end-to-end tests require importing all of k/k. Another example I will give: we were shocked a couple of days ago when a bug fix was made to one of the cloud providers, and it turned out to pull in something like fifty thousand lines of code. Thankfully, not all of them went into the binary, but we were shocked and surprised. So things like that happen, and not everybody is aware of all the changes that go in, so we need a concentrated set of people who want to do this kind of work.
E: It means reorganizing a lot of the code, splitting things out into separate repositories and staging, things like that. To do that, we don't have a label, we don't have a project board, we don't have a regular meeting. So that's where we want to start: we want to start organizing the work and identify the people who are interested in this. So maybe what I'll do right now is ask: who is interested?
E: So one thing I'll do is give an example, right? You need to know a little bit about how Git and GitHub work, but mainly some Golang skills would be really helpful. We're not looking for anything more than that, because, for example, just getting dependencies up and running, or updating projects to newer dependencies, is simple stuff; I walked somebody from India through how to update our docker dependency this morning. There is a lot of work there. But there is more complicated stuff too; for example, we have two dependencies for file notification, fsnotify and inotify.
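[For context: fsnotify (github.com/fsnotify/fsnotify) is the cross-platform library, while inotify wrappers are Linux-only but expose more event types. A minimal, illustrative watcher using the generic fsnotify API:]

    // Illustrative fsnotify usage: watch a directory and log events.
    // The OS-agnostic API surfaces a smaller event set (Create, Write,
    // Remove, Rename, Chmod) than raw Linux inotify.
    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer watcher.Close()

        if err := watcher.Add("/tmp"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-watcher.Events:
                log.Printf("event %s on %s", ev.Op, ev.Name)
            case werr := <-watcher.Errors:
                log.Printf("watch error: %v", werr)
            }
        }
    }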
B: Or do we all need it here? Yeah, no, I mean, I think one thing we need to do is identify which things have the highest need to make progress, which things need owners, and, if we don't have owners stepping up right away, create roles for those and advertise them, post them to the role board and whatnot. I do agree that Golang skills, you know, like go modules and an understanding of how imports work, and the problems with tangled dependencies, mismatched dependencies, and mismatched versions (we want to version the code that we publish), all that stuff would be very helpful. You know, all this stuff has grown organically according to the needs of specific sub-efforts, like the cloud provider efforts or API machinery, or, like, we have some extraordinary...
B: So, you know, even just documenting how the existing stuff works and what the best practices are, so that people in other SIGs, who didn't create their mechanism originally, can at least understand what mechanisms are available, which ones they should use in which situations, and stuff like that. We've discussed doing this for a long time, but a number of things are coming to a head, and Jordan's going to talk about some of the other recent ones, like the go modules work that dims didn't mention. So yeah, we're getting some volunteers in the chat.
B: David, what I would say is that staging (the mechanics of staging, when staging should be used, how staging works today, making it work better) is part of this effort. If we want to move away from staging, I think that would be a conclusion that comes out of people working on this effort, as opposed to something we understand right now.
E: Then the other thing is that people don't really know all the options that we have right now. We have a third_party directory in k/k, and we have the opportunity to create new repositories under kubernetes and kubernetes-sigs. There are even people hosting modules in their own GitHub orgs, and we are vendoring those in; we can move some of them into kubernetes-sigs as well. And then there is staging, of course. All of them have trade-offs, right?
E
Another
trade
off
that
we
were
discussing
this
morning
on
the
signal
was
there
is
a
package
called
streaming
which
people
are
using
across
the
different
CRI
implementations?
But
ours
is
the
sole
and
it's
hard
to
render
KK
so
should
we
should
it
go
into
staging,
or
should
it
go
into?
A
separate
repository
was
what
we
were
debating,
so
we
have
to
kind
of
like
come
up
with
the
checklist
or
or
some
guidance
of
some
kind
to
say.
If
this
is
the
problem
that
you're
facing
then
go.
A: If I understand it right, what we're probably going to end up doing here, just like we did with conformance and the API reviews, is break off into a separate sub-meeting once we've got other people, and staging and priorities and all of these things will become topics and agendas, and we'll have issues and labels and boards, as all of that work gets worked on and then reported out through here. Is that right?
E: That's right. And then I think we figured out how to get approvals, and who to get approvals from, things like that, right? We're going to get approvals from the SIG first and then bring it here. We'll document those flows as well, so it's easier for people to do this, and we should not be the bottleneck there. Fantastic.
H: We want to see if we can negotiate more aligned support policies, so that they can inform any decisions we make about possibly supporting a fourth release. Anything we want to do around our release support timeframes needs to be informed by the things we actually depend on. So take a look at this document.
H: If you are currently working on any of these efforts, make sure that the things you're doing are represented here, so we can be aware of them, or add anything you have to contribute about some of these dependencies that we already have. Just really quickly, the categories are: support policies for our existing dependencies, so you see things like Golang and etcd, and then other things that might be used in some cases, like docker, containerd, or DNS. We just want to make sure that we know if there are any gaps.
H: Each of these adds a dimension to the test matrix, and so we want to figure out how we can actually test the things we say we support. The more painful the first two are, and the more gaps we identify in them, the more we will be motivated to actually reduce our external dependencies. That is an easy way to improve the situation: rather than pinning to specific versions of things, actually relying on APIs for the interactions. That's been going on in a lot of areas for a long time.
H: Some of this may have actually been stable and unchanging for years now, but we need to do the work to get those APIs that we say we support characterized as stable, if they actually are, and then we can depend on them that way. Other things are in our release engineering toolchain, like vendor management; I think next on the agenda is actually talking about the switch from godep to go modules. I'm sure there are other things that I'm missing, so please add to this list.
H: If you know of tools that we are using that are unsupported, or that could be collapsed onto more standard tools, please help us get a good picture of our dependencies. And then the last item is similar to what dims was talking about: if there are dependencies we have that aren't maintained, or that we don't need anymore and can collapse, identifying those so that we can chart out the work to actually go do that would be good too.
H: I think I sent this out to the mailing list, and everyone on the list should have edit access. So yeah, please put your name on it, and ideally put your name beside particular items, if there's a particular dependency that you're familiar with and want to help rework, eliminate, or bring up to date. That would probably be the best way to organize.
H: Basically, go modules have been coming for a while; as of Go 1.12 they are available, and in Go 1.13 they will be on by default, so now is the time to get kubernetes/kubernetes working with them. In addition to just keeping up with the Go ecosystem and not breaking people who start using, you know, go get on kubernetes/client-go in the future, they actually have a lot of benefits for us, some immediate and some potential future benefits.
H: Anyone who has ever tried to use godep to update vendored tooling can appreciate that our current tooling is painful. As a point of reference, switching from godep to go modules reduced the time to rebuild our vendor directory from 30 minutes in CI to 3 minutes. So it's a 10x speedup, and it's even faster if you have local caches. It makes it much nicer just to do normal things like bumping the version of a dependency in the future.
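[For context: with modules, the dependency graph lives in a go.mod file at the repo root, and "go mod vendor" regenerates the vendor/ directory from it. The module path below is the real one for k/k; the pinned versions are purely illustrative:]

    // go.mod (illustrative excerpt; version pins are hypothetical)
    module k8s.io/kubernetes

    go 1.12

    require (
        github.com/fsnotify/fsnotify v1.4.7
        k8s.io/klog v0.2.0
    )

Bumping a dependency then amounts to editing the require line (or running go get with a version) and re-running go mod vendor, rather than a full godep restore/save cycle.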
H: If there's adoption of some of the more advanced versioning aspects of go modules, we might be able to have multiple versions of kubernetes libraries coexist side by side in a build. We can't do that yet, for a few reasons: not all of our dependencies support this advanced versioning, and we have not yet settled on switching to it, because it's an irreversible decision. But it's something we could evaluate and potentially do in the future.
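[For context: the "advanced versioning" here is Go's semantic import versioning, where each major version gets a distinct import path, so two major versions are distinct modules and can coexist in one build. The module names below are hypothetical:]

    // go.mod of a consumer (hypothetical paths): both major versions
    // of the same library are required at once; the /v2 suffix makes
    // them different modules with different import paths.
    module example.com/consumer

    go 1.12

    require (
        example.com/lib v1.5.0
        example.com/lib/v2 v2.1.0
    )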
H: Are there quick questions about it now? I do plan to do a kind of tech talk, you know, "what you need to know about go modules", that goes into more of the technical details, probably with SIG API Machinery, and I'll try to make sure that's recorded and sent out. So if you're interested in more about how this works, we'll do that in a different context.
A: Yeah, I really like the idea of a tech talk and having that recorded, because go modules and kubernetes, right: kubernetes minor versions break the API expectation that go modules have, the semantic versioning of the API, and how are we going to handle that? I think explaining some of these things to folks, especially those who vendor kubernetes/kubernetes or parts of it, will be really beneficial. I like that idea. Thank you.
I: I want to point out that there's vendoring and there's carrying patches. I think carrying patches is the thing we need to be cautious about; I know lots of people who take kube and carry one patch for their production environments. So we should definitely not do anything that breaks that use case; that would be abusive to the open-source community. Vendoring is definitely a more nuanced issue.
H: ...our code, right. Talking about this, I feel like we have the mental energy to keep API compliance with one API, and for kubernetes that's the REST APIs, right? Those are the ones that we pay attention to, don't break, and put all our energy towards preserving; doing it on two fronts is...
G: API Machinery; OK, API Machinery, yesterday. But yeah, it's like this: this does not mean that we won't break people, like re-importing and then having to make a bunch of code changes; this just makes it possible to consume with the new tools. I think we can and should think about addressing whether we like that our users are importing and using our Go interfaces, or whether we want them to continue.
A
Here's
what
I
would
suggest
on
this?
We
move
these
conversations
about
what
we
should
do
to
either
the
code
organization
or
API
sub
projects.
I'm
gonna
suggest
coda
organization,
and
then
we
hash
those
out
there
maybe
collect
who's
importing.
What
and
how
do
we
support
them,
but
those
become
a
focused
effort
under
there
and
probably
not
one
of
the
highest
priority
efforts
there,
because
we've
got
so
many
other
things
they
need
to
do
right.
A: Thank you, Jordan, for all the work, for carrying this and putting it together; this is nice to see. So with that, the next thing we have on the agenda is breaking out the volume snapshot CRD libraries into a separate repo using the publishing bot. Do we have somebody on to talk about that?
J: Right, so we want to break that dependency, and one of the ideas I had was just to follow the way that we publish APIs in the main kubernetes repo, which is to put them into staging and publish them out into another repository. I just wanted to check whether that's a good approach to take, or whether there are other alternatives we should consider.
K: One alternative that we did consider was having a completely separate repository for the API, separate from the snapshot controller, that all other components could vendor in just like any other external dependency. The challenge with that is that if we make any changes to the API, ideally we want to be able to test them immediately with the external snapshotter controller; that is the controller that implements this API, and it definitely needs to validate it.
K: So if we put the API in a separate repo, we would have to make the update, commit it, update the dependency in the external snapshotter repository to pick up the new API, test it, discover a bug, and then go back to the other repo to fix it, ping-ponging between them. So that's why this approach, the way that kubernetes has done the staging repos, is a little bit appealing: the API and the external controller can coexist.
H: ...around working in multiple repos and coordinating them by having CI, where CI is the thing that joins them and says: right before I merge this commit to the API repo, I'm going to actually make sure, in CI, that the head of the controller repo and this PR to the API repo build and work together. There are options and tools for that today, but whether that's the right way to do it, well...
G: I would say, if you're considering this, first investigate publishing separate go modules from the same repo. I think that is a possible thing you can do with the modules stuff; Jordan probably knows more. I mean, it's not really an option for our current staging setup, but maybe if you start fresh.
G: The relationship between repository and module is not one-to-one, right, and that's actually the thing that makes modules pretty powerful and confusing. But I sat down and learned everything about modules just this week, and it only took me half a day to wrap my head around basic operation and convert one.
H
Thing
that
will
make
your
life
a
lot
easier
is,
if
you
don't
have
to
publish
out
modules
to
their
own
repos.
So
if
you
actually,
if
the
canonical
place
to
consume
them,
is
within
the
single
repository
and
their
import
reflects
that,
so
it
would
be
like
I,
don't
know
what
your
ego
is,
but
kids
io
/
controllers,
/
API.
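[For context: a single repository can host more than one go module by placing a go.mod in a subdirectory; consumers then import the nested module by its full path, with no separate repo or publishing step. All paths below are hypothetical:]

    // Repository layout (hypothetical):
    //   go.mod       -> module example.com/controllers
    //   api/go.mod   -> module example.com/controllers/api
    //
    // api/go.mod, the nested module consumers import directly:
    module example.com/controllers/api

    go 1.12

A consumer would then require example.com/controllers/api in its own go.mod and import its packages without pulling in the controller module.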
A: That would be a good experiment, especially for go modules in our community and everything, so thank you. In the interest of time, I'd like to move on to the next thing, because we've got two more items in about 15 minutes. The next one is go inotify; there's a link to an issue here. I'm not sure, do we have somebody on who can speak to this? Yeah.
B: I mean, dims knows more about it than I do, but I added it to the agenda, since code organization was the main topic generally for today. This is just an example of an issue that comes up all the time. It's really like the external dependencies thing that Jordan talked about, but we don't need to make a decision here so much as just make sure that people are aware of it; it's an example of the kind of thing that we need to be able to sort out pretty quickly.
G: I think it's a great example. In prep for today, I looked at the agenda and thought this one was interesting, so I went off and did a little bit of homework on it. Long story short: the module that we use was deprecated a long time ago as OS-specific and was replaced by an OS-agnostic one, which doesn't have the same set of capabilities, and the generic fsnotify version doesn't support some of the events that we're using. It's not clear to me from looking at the fsnotify repo whether they want to; there's one open comment thread that suggested maybe they were, but then it was closed with no merge, and that was a year and a half ago or something. So maybe they're open to our changes, and we should be deciding, sort of as a project: do we spend energy to try to convince upstream projects to do the thing we want to do?
G: Or do we just say fork it and bring it into our own project? Which has happened: I don't think he's on the call, but sigma on GitHub did that and forked the old inotify one into his own space. Maybe we just fix the bugs there, or we fork it into kubernetes, like we did for klog, and fix bugs there. We have a lot of options here, and I think it's actually really interesting to think about the pattern, right?
E: Just a little bit more information: both k/k and cAdvisor used one of these modules, and in cAdvisor inotify had been used for a very long time. We switched to fsnotify and then vendored that into k/k. And then what happened was we saw some flakiness in a CI job, so we went back to the previous version of cAdvisor and then reverted cAdvisor back to inotify.
G: This is code organization stuff, okay, and, dims, like I said before, you can come at me; if it's a matter of helping to steer, I can make time for that. I just looked at it today, I even started on the patch, and I backed away from it, because I realized it would take a couple of days' time, and I don't have that right now.
I: So yesterday a discussion was triggered where someone asked about the stability of the VMware cloud configuration API, or config file, that's used to configure the cloud controller. They raised a whole bunch of issues. I asked around about who did the API review on changes to the cloud config; the answer was "I didn't", and I was left wondering who did. So I was asking in SIG VMware, and it spilled over into SIG Cloud Provider.
I: The general question, and I think the discussion we need to have, is: we want to move components out, so what are the implications for components that sit on the edge or enable the system, which may or may not fall under conformance? Obviously we want to make sure VMware keeps working, that the VMware cloud provider keeps working as we release new versions of kubernetes, but does that mean that their config file format is an API that needs to be reviewed the same way we would review the kubelet config? This has been moved into an issue, and I think it'll fall under the API review sub-project. I just wanted to surface the general discussion and make sure folks are thinking about it: we have core APIs, and then we have a whole bunch of other stuff that, to an external admin, is an API that we support, and we need to get a little bit crisper on where the boundaries stop as we move some of these core features out of tree, right?
E: Some more context here: each of the cloud providers has been maintaining their own config, and there have been no reviews from anybody else; I don't go review the VMware config, for example. But most of us have made sure, I mean, all of us have made sure, that the previous config is still honored when we make changes like adding a new field, by making use of proper defaults and things like that, because, you know, we always get upgrade issues.
I: I don't know if that helps... No, the implication wasn't an accusation that we are breaking compatibility. This is a good example, and the distribution is mostly working, and I'm really happy that we're not seeing breakage. But it wasn't immediately clear, looking at that file, whether I could depend on it the same way that I could depend on, for instance, v1 core never getting regressed, as a consumer or administrator.
M: Thanks, man. So, on that point, dims, I agree that thus far we've done a good job. I think I was the one that raised this issue to Clayton recently. When you audit the existing set of cloud config options exposed across all of the cloud providers, you start to observe things that are a bit like feature gates, where if a user was ever to turn one on in their cluster, they could potentially never be able to turn it off cleanly. And I don't think that type of "consumer beware" understanding is being consistently applied across these cloud configurations. I think, as a project, it's fair to say that they're both woefully under-documented and yet super critical to running properly on any infrastructure, and so I think getting better discipline here across the community would be really helpful.
M: I mean, they're not only YAML; they're a weird file format that looks like INI or some type of thing like that. But if you look at the options within them, I think you'd see, if you look at the vSphere one, they have a large number of configuration options; you look at the OpenStack one, there are things there.
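[For context: an abbreviated, illustrative cloud-config excerpt in the INI-like (gcfg) style being described; the section and key names here are examples modeled on the in-tree vSphere provider, not an authoritative reference:]

    # Illustrative only; real providers expose many more options.
    [Global]
    user = "admin"
    password = "secret"
    insecure-flag = true

    [VirtualCenter "vc.example.com"]
    datacenters = "dc-1"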
H: The only thing I can think of is putting the structs that load config in a package that focuses review, so that changes need an ACK from some set; it could be cloud provider API reviewers, it could be a group dedicated to that, but just so that they don't get changed accidentally or thoughtlessly. Some of them are copies of existing cloud provider config files, and so it might seem reasonable to take a current snapshot of whatever that cloud provider's config is and just drop it in.
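[For context: in the kubernetes repos this kind of review gating is usually done with an OWNERS file in the package that holds the config structs; Prow then requires approval from the listed people or alias before changes to that directory merge. The alias name below is hypothetical:]

    # OWNERS file in the config-struct package (illustrative)
    approvers:
      - cloud-provider-config-approvers   # hypothetical alias
    reviewers:
      - cloud-provider-config-approvers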
N: Yeah, I just want to say that the way I've been using it, or the way we've used it on the AWS provider specifically, has been for very niche edge cases that we acknowledge are important but which are not general. So we don't say no; we support them for those users, but it's not something that we expect most users to do. So if you set it, on your own head be it, and it's not documented, because here be dragons. I think it would be a perpetual v1alpha1 API if we were to make it an API. That's how it has been used so far. Now, maybe we shouldn't have done that, but that's sort of where we are in terms of how the AWS cloud config was used, and most users will never set it.
C
So
I'm
struggling
with
concrete
action
items
that
we
can
advise
people
to
partake
in
other
than
yes
documentation,
but
we've
known
about
that
and
I
have
stated
that
unequivocally
in
many
different
fronts
for
a
long
time,
but
there's
also
standards
that
we
want
to
apply
so
that
they
have
some
consistency
across
providers.
But
I,
don't
I,
don't
know
how
we
do.
That
is
how
exactly
would
we
that
follow
you
at
the
core
organization
and
structure
or
API
group
who
would
take
ownership?
They
try
to
drive
progress
floors
here.