From YouTube: SIG Architecture meeting 20190919
Description
A: So, who do we have here? We have Matt, [inaudible], Wojtek, okay, Sean. Okay, so I raised this in the SIG Architecture call last week as well: there's a bunch of things that are in progress, or that people are thinking about, or there are issues logged or initial PRs logged, and in some cases KEPs as well, and this is what I have collected over the last few days. Is anything missing? That's the first question: are these all of them?
A: So this came up — I believe David brought it up in the API machinery meeting. Basically, the talk here was, when we were talking about moving kubectl and kubeadm out, David pinged me asking: okay, do we eventually want to move kube-apiserver out of the main repository, and then, if so, do we want to move it to staging? You know, just like the other things, so kube-apiserver could have its own cadence, and then maybe we could vendor things — and/or, if it eventually gets moved out, could we vendor it? And so, you know, the API machinery can evolve independent of the Kubernetes release. Basically, right now what is happening is that API machinery is using branches to do its work — for example, server-side apply — and then it's getting merged. So that's the current process that they are following, right?
B: It seems like, for several of these things, there are two steps. One is, like, the config: as we move to file-based config, having the actual Go types for that config published out so that people can use them — that makes a lot of sense. We already did that for most of the components; there are staging repos with config folders that have the config types.
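(As an illustration of that pattern — not something shown in the meeting — here is a minimal sketch of consuming one of those published config types. It assumes the k8s.io/kubelet staging repo's config/v1beta1 package and sigs.k8s.io/yaml; the file path and field are only examples.)

    // Rough sketch: read a file-based component config into a published Go type
    // instead of re-declaring the struct in every consumer.
    package main

    import (
        "fmt"
        "io/ioutil"

        kubeletconfig "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Path is illustrative; any kubelet config file would do.
        data, err := ioutil.ReadFile("/var/lib/kubelet/config.yaml")
        if err != nil {
            panic(err)
        }

        // Decode the YAML/JSON config into the published KubeletConfiguration type.
        var cfg kubeletconfig.KubeletConfiguration
        if err := yaml.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("evictionHard: %v\n", cfg.EvictionHard)
    }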
B: I think it's less clear what should be done as far as moving, like, the actual binary — the code that you need to run kube-apiserver. I guess I would ask the question: what's the purpose of moving it out? Is it to isolate kube-apiserver and, like, make dependencies really clear, or is it to support using kube-apiserver as a library? If it's the former, then I think that makes sense. If it's the latter, then I don't think kube-apiserver is ready or intended to be used as a library.
C: I just wanted to mention that we already have, like, this generic API server in staging, right? So it's really only about customizing it for Kubernetes, more or less. I don't know what is missing; it isn't clear to me, at least, that it's really needed for the reuse purpose, but, as was mentioned, for a clearer, cleaner separation it might make sense.
B: So yeah, I'm not opposed to isolating it in a staging repository, but if that's the purpose, and we don't want people to take it and build on it like a library, we need to be extremely clear about that — to say: if you want a library, use the API server library, that's what it's for, but kube-apiserver is only intended for building kube-apiserver. So.
B: Yeah, I remember David saying that, but I didn't hear a lot else from him. I tried to sort these roughly in terms of things that are already in progress and already have plans around them, and then things that are not in progress, as far as I know, but have really strong use cases around them. I think there's an issue open around the device plugin API, where people who are consuming it are saying: you want us to write to this API.
B: So what are we supposed to do with that? And that's a very reasonable issue, so we should publish those as soon as possible. The credential providers: there are people who are using the generic ones that just consume, like, a secret or a file and then turn that into a registry credential; that is reasonable to want to reuse. As far as, like, making them pluggable, I think that needs more thought, but extracting the ones that people are reusing to a place where they can more easily reuse them makes sense.
B: I don't see that as a compelling blocker to keep them — I think we talked about this in the meeting with them. Okay, like, we have stability guarantees around command-line flags and config files. So, for the things that you're currently driving via command-line flags, you should rely on our stability guarantees, and if you encounter bugs, open them as breaking changes and we will fix them and backport them. So, I mean, we—
D: Well, you know, me personally, I'd like to eventually see it, you know, be deprecated, but I know the Docker team had volunteered — or at least I remember the Docker team was gonna volunteer — to keep it alive, and it's always good to have at least more than one, you know, vendor, although they're using containerd. So it's sort of, you know — what was the point, right, once we clean up the CRI API enough that we don't need to go to Docker directly for workarounds?
D: Then certainly, since Docker is already using containerd, it makes sense to go directly to containerd and use the host namespaces, you know, as opposed to having the issue of merging, you know, containers between pods and Docker containers. I don't know — I think we just, you know, let's go with the current plan of cleaning up the CRI API and removing all of the, you know, hacks where you have to go around the CRI API to get at Docker directly.
D: I think there was a concrete list, but, you know — I mean, then I saw a PR going in and made it plus-one, right? So it's one of those never-ending kind of things. You have to have a plan, I think, to get off it completely. You almost have to deprecate it first to get people to stop, because step one is: stop adding.
D: You know, we're not in a bad place with it right at this point in time, and we are closing the gap, and people aren't complaining when they move from Docker to CRI-O or containerd; they're pretty much happy unless they had some hacks, you know, in the Docker API that they were expecting, and usually it's around some kind of feature that we just haven't gotten to yet — you know, like sidecars, and getting really good sidecar support.
D: There is a set of PRs I'm working on that might help that a little bit. You know, at the bottom layer we're all using runc, and that has hook support. If we can formalize the accessibility of the hooks, we can probably get rid of most of the Docker API requirements that customers have when they want to, you know, switch to a CRI implementation.
B: I think SIG Node can probably speak better to the specifics of, like, how to manage that deprecation from a code-organization perspective. It was more a question of: should we be putting effort into splitting out the dockershim code just to isolate it? Like, are we planning to isolate it and stage it and publish it and maintain it externally, or is the plan to leave it more or less as-is, feature-freeze it, and put effort into CRI? We're happy to leave that to SIG Node, yeah.
D: We didn't really have a good strategy for that at the kubelet. We can do an HTTP connection right on containerd or CRI-O for external customers that want to connect to, you know, our runtime for stats and things like that, but we don't have a good way to authenticate — the certificate story we're using for that — nor do we have it between any other node server, you know. So when we create those streams, we're doing self-signed on occasion; I think we need to fix that.
D: When we create that stream, it's a direct connection between us and the API server that was originally proxied through the kubelet, and when we do that there's an authentication process for it, but for that connection we just provide a self-signed, dedicated certificate.
A: So, the test framework — let's start from the beginning one more time. So, the cloud providers: I'm going to poke at the OpenStack one again to see if they made any progress with the CSI migration stuff, so we could then remove it. It did move to staging, unnecessarily, but then — long story. So, the kubectl item: is there any work for 1.17, Sean? Yeah?
G: Actually, so we've got 90 to 95 percent of kubectl moved into staging. I have a PR out now which is going to rewrite basically all the kubectl printing tests for internal resources; it's a substantial PR, but it only touches testing code. So, for kubectl convert, my strategy to get that out — it's now deprecated — is to first create it as a plugin.
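(For context on the plugin route: kubectl discovers any executable named kubectl-<name> on the PATH and runs it as "kubectl <name>", so a standalone convert plugin can begin life as a plain binary. A hypothetical skeleton, not the actual plugin:)

    // Hypothetical skeleton of a standalone kubectl plugin. Build it as a binary
    // named "kubectl-convert", put it on the PATH, and kubectl will invoke it
    // when the user runs "kubectl convert".
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Arguments after "kubectl convert" arrive as ordinary os.Args.
        fmt.Fprintf(os.Stderr, "kubectl-convert invoked with: %v\n", os.Args[1:])
        // Real conversion logic (decode, convert, re-encode) would go here.
    }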
B: I have a plan; we'll see how well it bears out when it meets reality. I started looking at pulling those packages out, and there were a couple of nasty dependencies that would make you have to pull in API server, which I don't want, so it might take a little longer to unwind than I thought, but I think we'll get there. If I understand it, the initial benefit of getting it into staging was so this could be reused by people who wanted to use, like, the various commands that we have, and then the second benefit was to let us publish kubectl at different cadences. Right — I mean, obviously, until we totally detach it from staging, we're not going to publish at different cadences, though, right? Yeah. The short answer: the RBAC stuff looks doable, but maybe a little bit involved. It's not my priority this release, but I could outline what needs to be done if anyone wanted to put time into it. Yeah.
A: Okay, so going down the list: we already talked about the device plugin API and credential providers. The kubeadm folks seem to be making some progress — I saw some updates from them. There was a Cluster API face-to-face, so people are not around today; they are traveling back home. Test framework: I have to check on it; I don't know how far they got or what their plans are. Hyperkube: I will propose an umbrella issue or a KEP outlining the steps, and we will create another repository. Oh, going back to Sean — Sean?
E: A question about that, Jordan, yeah. If we were to go and actually separate out kube-apiserver and put all of our internal APIs into a different, segregated spot, you would at least not pull in kubelet-style dependencies, like the various runtimes and runtime clients. When we did that, would that make it easier overall?
G: That's a good question, and I had assumed — so my thinking is that, you know, even though we announced deprecation a year ago, there's still gonna be people who depend on this, and us giving those people either a plug-in, or even just a separate binary that they can somehow download to plug in that functionality, I think would be pretty useful.
E: If I got to choose — if I were king of the world and could do whatever I wanted in this repo — I would come in and say that we are going to create a kube-apiserver convert command: there would be a subcommand on kube-apiserver that would actually convert the resources that it directly understood, and it would be shaped such that it could be plugged in as a plugin to kubectl.
E: If someone truly wanted to — and since it is just a straight scheme conversion — it would be almost free to kube-apiserver and a large cost to anyone else. If I were king of the world, right? I don't know how the general sentiment outside of myself trends; Jordan would probably have opinions.
B: I like the idea of it living next to the thing that already understands all the types, rather than trying to, like, have some other repo then pull in and depend on all the kube-apiserver stuff. That seems very small and doable; the distribution — the build and distribution mechanics — could get a little messy.
E: But, having thought about it for more than just the 30 seconds I thought about it while you were talking, I would like to actually move all the types to a single repo for the command there, right? Like, when you look at the dependency tree of what that would actually involve, it entails nothing more than a scheme, basically: you have a way to build a scheme, you register everything in the scheme, and if we look at what we've done in the OpenShift API repo, we've already built what's essentially required to do this. So I think that would be more maintainable over time, and you would actually get better traction for conversion logic overall if you were able to give an example. That was me.
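(To make the scheme-based conversion idea concrete: a rough sketch of what a convert path living next to kube-apiserver could look like. This is not an existing subcommand; it assumes the in-tree legacyscheme plus a group's install package, which register the internal and external types and their conversions.)

    // Rough sketch only: decode a manifest at whatever version it is written in,
    // then re-encode it at a target version, using the same scheme and
    // conversion functions kube-apiserver itself registers.
    package main

    import (
        "fmt"
        "io/ioutil"
        "os"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/kubernetes/pkg/api/legacyscheme"

        // Registering the install package adds the group's internal and external
        // types plus their conversions to the legacy scheme.
        _ "k8s.io/kubernetes/pkg/apis/apps/install"
    )

    func main() {
        data, err := ioutil.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }

        // Decoding with no target version converts to the internal (hub) form.
        obj, _, err := legacyscheme.Codecs.UniversalDecoder().Decode(data, nil, nil)
        if err != nil {
            panic(err)
        }

        // Encode back out at the requested version; conversion happens here.
        out, err := runtime.Encode(legacyscheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion), obj)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }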
H: The test one we talked about already, and I think that one's the only one that still needs a lot of movement outside of this thing, just in terms of, like, getting agreement and getting work queued up for the hard parts of it. So it's on my list of things to do a little bit of nagging on. Other than that, it's pretty clear. The test framework — you mean not just the test framework, but the e2e's? Start tightening the dependencies — a lot of dependencies. So that's pretty much the second bullet point.
A: Okay, okay, so I think we are done with the rest of the things here that we wanted to talk about. Oh, the client-go stuff — we didn't talk about the client-go stuff yet, so let me backtrack a little bit. So, the client-go one I added this morning, based on something that I saw float by, which was Darren's usage of — you know, you all saw that one, right? I think Liggitt commented on it, and Clayton, you responded on that one as well; you had mentioned that client-go could be rewritten.
H: The last comment in there was that the use case itself is unsupported, which is mixing and matching our core libraries at different versions — so, like, a 1.14 client-go against 1.16 API machinery; we've just never even tried to make that work. So I didn't understand: what they'd like to ask for was something that we've repeatedly said is crazy land, but on the surface it looks like not crazy land, and so I do want to make sure that we didn't miss something in the request, right?
A: The other thing that came out of this request was the deprecation policy for the Go API. I remember when we were doing the static checks we tried to keep, or leave, the existing public stuff — you know, not rename them and things like that — but what Jordan pointed out to me today was that we don't have a policy for the Go API. So do we keep going this way? Do we need something, at least for staging? That's the question. Yes! So.
B: So, also to be clear, like, the API machinery repo's README says things like: there are no compatibility guarantees, it's in direct support of Kubernetes, branches will track Kubernetes and be compatible with that repo. Like, matched versions are required for all of those components and always have been.
A: To be clear, I'm not asking for a stricter policy. All I'm saying is we need verbiage that we can point somebody to. We—
B: —can copy the existing READMEs into more visible places, yes; that's good and fine, and give guidance around how to achieve those matched versions. We recently updated the client-go INSTALL and README with information about how to use the tags that are given across those components, but yeah, we can make that more visible, and do that in parallel with efforts so that people who want to consume libraries and not be broken have a more reasonable way to do that.
B: Release to release, I've said many times, I think we have the energy as a project to maintain compatibility for, like, one or two or three sets of APIs, and for us the priority has always been the REST APIs, and the config and CLI libraries as APIs. So we care intensely about that, and that's where most of our compatibility energy goes.
A: Jordan, can you point me to the existing verbiage, and I will draft up something that is in a more public spot? Sure. Okay, thank you. Now, we do have fifteen more minutes. Does anybody want to go through the issues or PRs? There's only a couple of them; we have a few issues, and not that many either. We can just — we don't have to click through each issue.
A: We'll just crawl through them — is that okay? Let's do the PR one first. So, hack/e2e.go: this PR is from Ben, and one of the questions in it is that there is a make target that we don't know who is using — it's called "make test-e2e" or something like that. So do we have precedent for how and when we can get rid of stuff from the make targets, Jordan?
B: I don't have a good sense for that. I would say SIG Testing and anyone downstream of that. I don't know if — Clayton, do you know if the conformance stuff uses that in any instructions or points to that? I know Tim St. Clair was pushing on getting e2e test artifacts externally consumable, but I don't know in what form, yeah.
A: The other one was the licenses regeneration stuff. I don't know if anybody's interested here; the only person who was interested was Tim Hockin, and the person who filed the issue needed some help, so I'm gonna be helping him, and I put it on the code-organization agenda because it's about moving stuff around.
A: This is the same thing — import-boss; it's related to those dependencies. Yeah, it's about getting rid of staging stuff that's not gonna have vendored dependencies — we've all talked about that. So, one thing here is — and there's at least one PR where I'm pushing back hard on somebody — let me give you the background. There is a repository called Microsoft/hcsshim, and there is a PR to update that, and I'm pushing back on it and I have a hold on that PR.
B: I think it depends on how reviewable the change is, and how critical the fix is. So if it's fixing an issue that is affecting Kubernetes, and you're actually reviewing the diff of the dependency and can see, like, okay, I see this bug is getting fixed and no other changes are coming in, then I wouldn't push too hard.
B: If it's a big diff, and you don't have — or the dependency reviewer doesn't have — the expertise to be able to review it, then I think we should err on the side of requiring going to releases the maintainer of that dependency tagged. If it's really getting contentious and it's going back and forth, then actually reaching out to, like, the people who maintain that dependency and asking, "Is this a reasonable place to snapshot your dependency?" — maybe we could go there. I think it just depends on kind of where on the spectrum it falls.
B: Not exactly code organization, but it might show up in a few places: the Go 1.13 work is starting. I resolved a couple of cycles that we had with some of our dependencies, which lets us run the tidy commands under Go modules. My guess is we will try to switch to Go 1.13 without using modules in our build at first, just because it minimizes the number of changes we need in our build scripts, and we would really like to be able to take Go 1.13 back to 1.16, because 1.16 will still be under support once Go 1.12's support expires — which is not a great position to be in otherwise. So we'll probably stage this in two phases: the first phase just switching to Go 1.13, and then a second phase where we start to use modules in our build and hopefully get rid of a lot of cruft around GOPATH malarkey in our build scripts. So.