From YouTube: sigs.k8s.io/kind 2019-04-08
A: …stack… and then you still need code. So I guess my concerns are, one, we're increasing… Something I've been looking at is slightly orthogonal to this, but it's going to relate. First of all, as we've changed how we're doing things with the images, we need a better way to keep track of it so kind can support it properly.
A: So what I'm looking at is labeling images with some metadata, so we can say, like, this image is built with Calico or something like that, or this image is built with this version of how we decided to handle CNI. That way future kind versions can detect: oh, this is the second iteration of how we were managing CNIs, I need to work in this mode if I'm going to boot this image. Or at the very least they can say: this is too old, this image is old.
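A sketch of what that labeling could look like, assuming plain Docker image labels; the label keys here are hypothetical, not anything kind actually ships:

    # hypothetical: stamp metadata onto the node image at build time
    docker build \
      --label "io.k8s.sigs.kind.cni=calico" \
      --label "io.k8s.sigs.kind.cni-iteration=2" \
      -t kindest/node:dev .

    # a future kind version could read the label back before booting the image
    docker image inspect kindest/node:dev \
      --format '{{ index .Config.Labels "io.k8s.sigs.kind.cni-iteration" }}'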
A: But this ties into that, though, and I like it. Because if we do that, then we can change how some of this stuff works and have an avenue to at least know which version was supposed to work, because we have a problem right now where we just assume that certain paths are going to exist and that we're not going to break that.
A: And then we realize, oh well, having one CNI manifest wasn't good enough, we need n CNI manifests, and detecting whether the image was built before or after that isn't very good right now. So if we were just like, oh, we're going to grab the IPv6 manifest, we'd just say it doesn't exist and give the user some terrible error.
A: I've been thinking about how to solve it, and yeah, CRI is adjacent to that, but it's not actually the CRI itself. I'm looking at eliminating the tar loading at runtime and having the images already loaded, and so that will change boot-up behavior. Ideally we still don't need to break things; we can support the old behavior, but we need to, like…
A: We need to be able to identify that we switched behavior in this image, or we switched the expected behavior, and the CNI stuff is going to fall into that. The other concern that I've had for the CNI, that I haven't figured out what to do with: it looks like right now you've just gone with default subnets for everything, which is probably great, I agree with that, but thinking about it some more, I'd better believe someone's going to start hacking on…
A: …you know, configurable subnets, and now I need some kind of API for controlling that. You see that already today: people use a couple of places that we didn't intend to really be APIs. Like, if you want to play with it, you can use an extra mount to bind-mount a CNI manifest, and then it will boot that instead, which works, but it's not really an API.
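Roughly what that workaround looks like, assuming the v1alpha3 config format; the in-image manifest path shown is illustrative, not a stable interface:

    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
    - role: control-plane
      extraMounts:
      # shadow the baked-in default CNI manifest with your own file
      # (this containerPath is an internal detail, not an API)
      - hostPath: ./my-cni.yaml
        containerPath: /kind/manifests/default-cni.yaml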
A: So it's really hard for us to migrate, because we're going to migrate some things. The biggest definitely-breaking change I've been thinking about is: if we switch the CRI and we stop doing tarball loading, that potentially breaks a bunch of people who are depending on, like, oh, I can just drop this manifest here and these tars there and boom, it does stuff. Or, oh, I can just talk to Docker on the node, except Kubernetes isn't talking to Docker, and containerd has different storage. So I think, to do this migration…
A: …we probably need to introduce a mechanism to say, almost like an API version for the image or something like that, and I've been trying to think about how to do that well. One way to go about it is just versions. Another way is to tag with, maybe, a list of capabilities or something like that: this image is containerd, this image is, you know, Calico, multi-manifest, something like that, and then we could check that at the beginning of the cluster create process.
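A hedged sketch of the capability-list flavor, again with a hypothetical label name; cluster create would read the list back and pick a code path:

    # hypothetical capability label baked into the image at build time
    caps="$(docker image inspect kindest/node:dev \
      --format '{{ index .Config.Labels "io.k8s.sigs.kind.capabilities" }}')"
    # e.g. caps="containerd,calico,multi-manifest"
    case ",${caps}," in
      *,containerd,*) echo "use the containerd boot flow" ;;
      *)              echo "fall back to the legacy flow"  ;;
    esac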
A: On the other hand, if we do a simple version number, that works pretty well for the path of, okay, this is a v2 kind image, we don't support v1 images anymore, or something like that. I'm not a huge fan of that, though, because there's not a lot of code so far to support old behavior. Like how we used to just install Weave: that's a pretty small piece of code, but it means that if you upgrade kind and keep using your setup with the same command line, it still works.
A: I would like to get people in the habit of just installing the latest release. We periodically get, oh, this thing doesn't work, and it turns out we already patched it, and I don't think we have enough capacity to really handle running old branches and doing patch releases on old branches and that sort of thing.
A: The problem with that is then people's templates depend on which things you're going to supply them, and then that's a new area that we have to not break. Similarly, if we do this internally, I'm concerned about where that's starting, but I'm wondering if maybe we can just do it internally, not document it, and then people can use it, but with the understanding that we're going to change the template mechanism; I mean, it's not stable.
E: Essentially, as a cluster operator, I've always found that installing a different CNI is, in most instances for most modern ones, just installing a manifest and maybe changing some flags on an API server or a controller manager. So kind could, in a way, just provide a way to say, disable the default CNI that we do support, and provide, maybe in some clever way, almost a…
E: …"but here is a load of YAML, go and apply this at the point where you would normally install Weave." And then it's kind of up to the cluster operator, up to the person running it, to make sure that the pod CIDR is synced up between the template they provide and the kubeadm config. But at least initially that would unblock this discussion, and then templating and everything else could come later.
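This is roughly the shape kind eventually adopted (a networking.disableDefaultCNI toggle in later config versions); sketched here against the newer API:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    networking:
      # skip installing kind's default CNI entirely
      disableDefaultCNI: true

followed by the operator applying their own "load of YAML":

    kind create cluster --config cluster.yaml
    kubectl apply -f ./my-cni.yaml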
E: But I think that's kind of unrelated to this problem in a way; we've already got that problem today. On create there are things they need to change, okay, and if they're providing their own CNI config, at least initially, just having a way to do that. And it fits quite nicely into the phases idea, because it's effectively a phase, and you're just saying: okay, just don't do what you'd normally do, and here's a load of YAML instead. Yeah, we're only exposing "here's some YAML", but I think for a lot of users…
A: Yeah, I mean, you don't have to do that, but if you're, like, hacking on your CNI or something, you probably would, and that's probably the main use case for not using the default CNI. If you're some other user, you're hacking on Kubernetes or something, you shouldn't be worried about which CNI we're running at that point, just that it works.
C: The way I see this is you can basically hard-code a bunch of CNIs as predefined options, their CIDRs also filled into the kind config under networking, for instance, to default to a different value if you want, but also provide a phase you can use to skip the default completely and have all of these install their own. So there are multiple layers to the problem here, yeah.
A: Because I guess the biggest problem is that if we are installing the… I think the simple answer is to just tell people: if you want to change the CIDRs in the kubeadm config, you need to install the CNI yourself as well, and we'll come back to this problem later. Because otherwise we have to deal with the problem of syncing those up, and then we need some mechanism to correctly configure the CNI, and that's a lot more complicated than the original problem.
E: Yeah, go ahead, Josh. If kind remains in its opinionated state right now, though, we could teach it how to do templating for whatever we choose to be the default that most users use, so say Weave for now, whatever, I like punting that discussion. So basically we teach kind how to handle the pod subnet by default, and if they suppress it, specify it, and configure it all differently, then they handle it, because we don't know how to deal with that, right?
A: But in that case we also probably want to make, like, a top-level networking option specifically for this. What you get there, I think, can come after, if we switch to Calico, which just seems to be where it's going. I've been thinking about it now because so far we've just totally ignored this problem, and we've been relying on the fact that it, well, sort of just sorts itself out with defaults and, I forget, some other thing, without syncing it up. Like, we're not setting the pod CIDR anywhere currently.
A: So it's already going to be a breaking change in some respect to actually start doing that. I believe that also means that we're not masquerading the pod traffic right now, because Kubernetes can't tell what's pod traffic and what's not, because we just haven't set a CIDR for anything, and kubeadm doesn't set a default for the pod CIDR.
C: Are you guys thinking about switching to Calico as the default because of IPv6, right? Yes. And maybe we should hard-code the CIDR in the kubeadm config template, yeah.
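Concretely, hard-coding it would mean pinning the subnet in the kubeadm config kind generates and keeping the CNI manifest in agreement; a sketch against the v1beta1 kubeadm API, with an illustrative subnet value:

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    networking:
      # hard-coded pod CIDR; the CNI manifest must agree with this value
      podSubnet: "10.244.0.0/16"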
A: But so a couple of people have piggybacked on this PR, and that's what this PR has wound up turning into: customizing the CNI, I think. Maybe we should go in a different direction and continue to say it's possible to override it, maybe we'll add the phase option for that, and instead we should put all the manifests we need into the image and not expose any options for, like, building with a different one natively, yeah.
A: That will be more convenient, and that way you don't need to build a new base image every time you want to tweak it. I mean, we could even write these at node image build time, which I think would probably be another good option. Because the other thing that's going to change is the tarball saving; I meant to put that up today. I'm actually looking at doing that at build time as well, so that we can make the loading faster, but that will break some of this.
A: It should be more reliable that way as well. So, with containerd, which I guess is the next topic here: with containerd there's this really nice thing where the snapshot storage is very cleanly laid out, identified, and separate, and we have a good default snapshotter, overlayfs I believe, that we should be able to run everywhere. With that option, what we can do is, for new images that are built to run with…
A: …that would be built to run with containerd, we'd have the option to load all of the images into the snapshot storage at build time, then shut down containerd and persist just the snapshot storage, which is a well-known location, and not anything else. Then at runtime we don't need to load the images at all; instead, you just start containerd.
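A rough sketch of that build-time flow, assuming ctr and the standard /var/lib/containerd layout; the exact steps kind would run are not settled here:

    # inside the node image as it is being baked:
    containerd &                                      # start containerd temporarily
    ctr --namespace k8s.io images import images.tar   # unpack into snapshot storage
    kill %1; wait                                     # shut it down cleanly
    rm images.tar   # /var/lib/containerd now holds everything; drop the tarball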
A: We don't even need to wait for containerd to start; we can move on to running kubeadm, and the images will just already be loaded, because we already have the snapshot storage in the image. If we do that, then we avoid a lot of copying around of these big Kubernetes image tarballs, and that's kind of a major blocker to boot speed right now, particularly on worse hardware.
A: The downside is you do have to say that it's going to run with this runtime rather than another one, because they have different storage, all the different CRIs. But you amortize this at build time, and for many nodes as well: you also no longer have to run this step across all the nodes, you ran it once when you created the image. And I looked into it, and this is actually how some of the more advanced VM image baking is done as well.
A: It's just more efficient, but it will require that we be a little bit more clever handling older versus newer images, and it would not enable, I believe, your suggestion of configuring which one. I'm not sure that we need to; it seems like using dockershim is kind of on the way out. It technically works, but no one is actively taking up maintaining it, and production deployments seem to have moved towards CRI.
A: Yes, but they would need to do some kind of hack, where you did something like mount the tarball or something; we'd probably stop doing that. Instead, you would use, like, kind load image, and that should still function. The idea is that the load step that we run ourselves, we should be able to pre-do.
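For reference, the supported path looks like this (these kind load subcommands exist; flag details may differ by version):

    # load an image from the local docker daemon into every node
    kind load docker-image my-registry/my-app:dev

    # or load a saved archive
    docker save -o my-app.tar my-registry/my-app:dev
    kind load image-archive my-app.tar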
A: Yeah, no, I'm not saying for sure we move to this yet, I'm still working on it, but I kind of stopped working on it for a bit to think about the problem with the images. Also, if we're okay with being opinionated enough to switch, that's the other thing: it means that we would actually switch from dockerd to containerd, and we might even want to switch to where we don't even run dockerd, and it should be lighter and hopefully more stable as well.
A: But it's also that much easier to debug so far. Then again, we may have similar questions in the future around, like, CRI-O; for example, I'm not sure if CRI-O supports that kind of behavior, but I guess we can cross that path when we get there. I'm not sure that we even really want to own being super configurable on every axis, and I think a couple of these things like CRI-O are mostly better tested on, like…
A: Yeah, but I'm going to change that, so that's actually really due to a different problem. The problem with that one is that the snapshotter needs to not be on another snapshotter, and right now the image is only built with /var/lib/docker as a volume. If we do /var/lib, or we do /var/lib/containerd, that problem goes away and we can use whichever snapshotter we choose, and the default, overlayfs, is probably the one we want to use, and I believe it's just in the kernel.
A: I mean, we also can't support multiple. My concern is that we keep finding that people depend on these details, so I think it's better to break it sooner, because people reach around: they do things like, I'm going to docker login on the node, and I've even helped a few people with this, because it has unblocked actual work. But then it means that they're depending on, oh, I can docker login on the node and that's going to be associated with Kubernetes.
A: I think a bunch of these things like CNI and CRI are supposed to be auxiliary, but you can't hide them; there's no way to make that a non-leaky abstraction. The fact that you're using a certain CRI, a certain CNI, people are going to use the features of those as well, yeah.
A: The other hope that Tim and I have had is that, in this case, we're not even installing containerd, we're just installing a more recent Docker, and Docker ships all of that, and they're moving towards a world where these will be using the same storage and everything. In which case, even if you're saying, oh, I just installed Docker and I installed Kubernetes, we can say, you know, use the CRI path.
A
We
can
there's
a
couple
drawbacks.
One
is
either
we
need
to
build
multiple,
different
images
with
different
ways
of
pre
loading
and
maintain
that
and
then
you're
going
to
need
to
select
an
image
that
matches
the
runtime
you
want,
even
though
in
theory
they
could
both
be
installed
or
we're
going
to
need
to
significantly
bloat
the
image
by
pre
loading
in
a
bowl
yeah.
A: I don't know if that's coming. The other thing is that it's kind of bad end-user UX, too, to say, oh, this config option only works with this image, and then we need to start tagging different branches. I think it's something we could do in the future; I'm not sure if there's actually enough demand for it, given the things that you can reasonably do with kind.
E: Well, like, we distribute the standard default image, which is what we'd call, like, a multi-arch sort of image, where that might not be the most efficient way of loading images, but it is more flexible because it can load into different runtimes. I guess, maybe I'm misunderstanding: all the runtimes can read in, like, one of the docker-save tarballs, we kind of…
A: They can, but it's one of the most expensive steps currently. It's part of why booting is as slow as it is, and it's not quite that bad on big fast hardware, but in your Travis CI setup or something, that is one of the slowest, most painful steps. We can probably, hopefully (we need to measure more), improve the experience quite a bit by not eating that default cost, by having everything already loaded into the correct format on disk, because the tarball format is actually an archival format.
A: That's an interesting problem, depending on how you look at it. At rest it should be larger, but it's also going to compress again, and either way, when you boot, the first thing we do is expand to that, and then you have both. So at run time it's going to be lower; at download time it has the potential to be worse. It depends on how well it actually compresses with Docker Hub and so on, and you can't really test that without actually pushing an image.
A: Yeah, so I would like to, but I wanted to discuss it a bit first, and I also wanted to look at, like, how we tag the images and the versioning. I do think we could just do the tarball thing and continue to eat the cost; but on the other hand, I think for most people this stuff shouldn't matter that much, though the cost does. I mean, for example, for the Kubernetes project, if we're going to run multi-node clusters, it's going to be better, faster, and cheaper for us.
A: I feel like it's going to be another one of those things, kind of like CNI, where somebody somewhere wants this, but not very many people, and I don't know that we need to work on that. I'd still like to someday say, sure, you can run literally any CRI, but I imagine a few, like Kata, are not really going to work in this environment.
A: You know, and when I'm looking at which of those will integrate well, I think containerd is going to work the best for our use case, and I think we can show, with which images work, that it should be a reasonable migration path. Talking to SIG Node and looking at everything there, dockershim is on a fast track going out, just because, while plenty of end users are using it now, that's just because they use defaults.
A: As far as maintainership goes, Docker is also not really expressing all that much interest in working on dockershim, and instead they're working on getting to a world where dockerd is built on top of containerd. Actually, right now I think it is built on it, but it's, like, in-process or something like that; they're trying to move to a world where it actually runs as a separate process and it communicates the same way Kubernetes would.
A: And the other thing, yeah, is things like the debuggability. Because they're not bound by, you know, how Docker has been, a number of things have been redesigned and are much, much easier to understand and work with, like the snapshotting on disk: you can poke around with that on disk and very easily understand what's going on. It's a lot less opaque, and everything is actually pluggable; it's documented, configurable, and that sort of thing.
A: Yeah, everything about the HA setup is brittle and relatively untested right now; we're not really leveraging it yet, and we pretty recently had some actual major bugs in it; I'm sure there's more. Yes, that one will be, but that one's still a pretty solvable problem. It will actually be the most annoying thing to do, because we would need to actually run a container ourselves or something, but it shouldn't be a blocker, and yeah, pre-loading that is probably something we want to start doing, yeah.
A: There's some other similar stuff: Duffie (mauilion) was looking at, like, configuring it himself or something, and the fact that we use HAProxy is also not necessarily something I want to commit to yet. There are a lot of reverse proxies that all have the same features, and we're kind of using it in its simplest mode; we're not really doing anything advanced with it.
A: I'm also looking, just in general, at the image build. Another thing I've been looking at and considering is eliminating the docker commit stuff we do, in favor of some build, since we're not actually doing package installs for the most part anymore. We could potentially break it into some actual normal docker build type stuff, and that has the upside that cross-compilation becomes a lot more viable.
A: If we pre-build the base image for various architectures, then we just need to build the binaries for those architectures, which we can already do by cross-compiling, and then just copy them in, and that's really close to what the install actually does today. The steps where we're running something outside of the base image are pretty superfluous now, except the apt install, but the apt install method is not very good today.
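A sketch of that docker-build-style flow, assuming per-arch pre-built base images and Go cross-compilation; all names here are illustrative:

    # cross-compile on the host; Go makes this cheap
    GOOS=linux GOARCH=arm64 go build -o bin/arm64/kubeadm ./cmd/kubeadm

followed by a Dockerfile that is mostly COPY, with no docker commit step:

    # base image pre-built per architecture (illustrative tag)
    FROM kindest/base:arm64
    # binaries cross-compiled on the host, as above
    COPY bin/arm64/ /usr/bin/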
A: Also, what I'd like to do eventually with that is get the build-from-tarball stuff working and combine that with a mode where we don't actually need to run the container to build, and then it should become pretty cheap to even do, like, cross-built images. And eventually we could even publish manifest-list images or something like that. Yes, so, just steps towards that, not rushing into it.
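For the manifest-list idea, plain Docker tooling already supports this (experimental in the CLI at the time); a sketch with illustrative tags:

    # push per-arch images, then stitch them into one multi-arch reference
    docker manifest create kindest/node:dev \
      kindest/node:dev-amd64 kindest/node:dev-arm64
    docker manifest push kindest/node:dev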
A: That's the other major breaking thing that I think we should fix. Instead, we probably want to bring back per-node patches, and we want to actually generate the file on every node instead of copying it from one of them. And for every node we should respect the Kubernetes version, and then you can also do a form of skew testing, where you set one node to one image and one node to another image, and we generate the appropriate config for each.
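The skew-testing shape would look roughly like this in the cluster config (per-node image overrides already exist; the tags are illustrative):

    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
    - role: control-plane
      image: kindest/node:v1.14.1
    - role: worker
      # a different Kubernetes version here exercises version skew
      image: kindest/node:v1.13.5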
C: So you could also implement the mechanism with patches and references. The way this works is, imagine at the bottom of the config you can place the three dashes, and you can have a patch that is reused across multiple nodes and another that is only specific to one of them, and with references you can do this. But it requires parsing YAML in a way that supports multiple documents in the same file, but this is, like…
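Sketched out, the multi-document idea looks like this; the "---" separators are native YAML, while the patch-targeting semantics are the hypothetical part:

    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
    - role: control-plane
    - role: worker
    ---
    # hypothetical: a kubeadm patch document shared by multiple nodes
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        v: "4"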
A: Yeah, yeah, I think we should start, in v1alpha3, by just adding per-node patches as an option. It will be a little bit unwieldy, but it won't be a breaking change, other than the fact that we're going to start generating the file on each node, and then for whatever we follow it with, it might be a good idea to use references instead.
A: It's a little bit more explicit, because you can at least see that, you know, this field is being set at this level of the struct when you look at a patch. With the YAML anchors, when you're looking at the piece that's going to be anchored, you have no idea where it's supposed to go; you can't tell, it's not possible, you have to find all the references to it. But it is something that is natively supported today.
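For contrast, a toy example of the anchor mechanism; the readability problem is visible even here:

    # the anchored fragment carries no hint of where it will be used
    shared: &common-args
      v: "4"
    apiServer:
      extraArgs: *common-args   # only the alias reveals the destination
    controllerManager:
      extraArgs: *common-args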
A: The fact that they wound up all being cluster-wide was a bit weird, especially in v1alpha2, where you put patches on a node but they actually got applied in a cluster-wide config. And then we started only respecting the first control-plane node's patches and not reading in any of the other patches, which was also strange. I'd like to fix that behavior, but it's not clear how much of a breaking change that is, or if we even care, since this is an alpha and it never behaved well to begin with, I mean.
A: Also, v1alpha3 is a little bit better on this, but part of that is that we just don't have node-level patching, yeah.
A: So, well, yes, I mean, you're generating it down the same way, but the one part that's going to change per node is the node's information, the fact that, okay, this is a Kubernetes v1.x node. But again, right now, if you have a config that sets images from different Kubernetes versions, it probably just doesn't even work. I don't think anyone has this, so if we break that behavior I don't think it's a real concern, and we are alpha, yeah.
A: But what I'm also expecting with this is that people probably don't actually need to patch that often, unless you're really, like, tweaking kind or something; it's already a very power-user feature. What will change is that the base generated file is going to become better.
A: I will punt this until next time since we're low on time, but I think we might need to consider having some other cluster-wide options for that, which we use for our own generation without actually templating or anything. Just saying, you know, if you set this verbosity field, then in the base config file we're going to go through every component and set that verbosity level, or something like that, because a few of those things are a bit overly painful today.
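What such a higher-level knob could look like; this field is purely hypothetical:

    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    # hypothetical: fan a single verbosity out to every component in the
    # generated kubeadm config, instead of hand-writing patches
    verbosity: 4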
A: Feature gates are another one, so that gets kind of weird; it gets into the territory of, I don't know, maybe we're trying to make the kubeadm config more usable or something like that. But it is still a real problem: if I'm a user of kind, I probably don't even really want to touch the configs that much, but I do want to do something like, I need feature gates on to test, and it might be worth making a higher-level option for that.
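Similarly for feature gates; kind did later grow a top-level featureGates map, roughly:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    featureGates:
      # fanned out to the components' feature-gate flags
      "SomeAlphaFeature": true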
A: So I've been trying to think about which of those would make sense, and if it even makes sense at all, because, I mean, it is possible to do this with the kubeadm config and with the patches today. It's just that, ideally, for common use cases you shouldn't need to touch the patches. They're more of a "we haven't implemented an option for this yet; if you want to try it, you can use this to try stuff." They're also useful for developing kind, but I think if users are having…
C: Yeah, so what I've started doing in kubeadm, convincing people that it's the right thing to do, is every cycle of kubeadm I try to define the items we can tackle. You know, we don't take on stuff that we cannot do; we know what is the most important and kick everything else down to the next release, basically, yeah.
C: We also started working on the ConfigMap locks this cycle; if we can, hopefully we want it for both… the control plane, yeah. I've been following that; that's good. Yeah, one quick question, kind of related to kind: do you know what's happened with Kubernetes there, like, did we manage to reduce the time?
A: I'm also not active on that front right now; trying to keep up with kind and everything else has been a bit much, and with some other frustrating things going on, I've stepped back from pestering for a little bit for the moment. I'm mostly up to date on that because Katherine finally stepped in to fix it, and Katherine sits across from me.
A: Yep, thanks everyone. Sorry that was such a, like, meandering one; I did not have time to actually put together any kind of agenda. We'll try to continue to improve the organization of this project, and, Antonio, I'll try to put more time into looking at the IPv6 stuff. I think we should look towards the multi-manifest thing, and I will look today at how we can start marking the images with information about what they support.