From YouTube: sigs.k8s.io/kind 2019-01-28
C: Okay, so I'm going to invite you.
C: We were looking at some discussion recently, which I need to get back to, about the load balancers and configuring that better. And I know Amir was working on some kubetest integration, so we can actually use this thing in presubmit. And over the weekend I pushed through a patch so that we can test with multi-node actually in CI, so that we're not failing the conformance tests.
C: So when you tell it a provider, it will use provider magic to find some things; I'm not sure what finds that one. But otherwise it doesn't seem to auto-detect, like, anything, and that flag defaults to, I think, minus one, and the test is looking for "there must be at least two nodes". So we're both running multi-node and actually setting that flag currently.
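(For context: multi-node clusters in kind are described by the cluster config file, which was still alpha at the time. A minimal sketch in the shape of that era's format; the exact apiVersion and field names were in flux and are given here from memory:)

```yaml
# Sketch of a multi-node kind config (alpha-era shape; fields may differ).
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  replicas: 1
- role: worker
  replicas: 2
```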
C: The other thing we should follow up on upstream: I believe that's looking for the number of worker nodes, which wasn't actually clear. So that will be useful for this, but I think, you know, it'll be easier for everyone if we make these things more obvious. We also do have a number of things in the backlog, as well as, like, PRs open. I know George has been doing some awesome stuff to actually get us a known-issues guide; I'm hoping to work with him, and somebody else similarly, to get a docs site up this week.
C: I think the biggest dragon is that we have to do a lot of actual installation. Normally, if you're gonna make an ARM image, you try to stick to something like "I cross-compile a binary for ARM and I copy it onto an existing image." But we, you know, we need to install all of the things that Kubernetes needs besides Kubernetes itself, and, like, that's our own image, so we would also...
C: We will also need to build that on ARM, because it's actually running a lot of code. So I think the way to go there is to request resources from, I think it was OpenLab, and set up some kind of integration in Prow, where we have a periodic or a postsubmit job that goes and builds those images for ARM. And then we'll have to look at... I think probably initially the easiest thing to do is to just tag those as special ARM images until we know that that works.
C: That's the biggest issue: we'd be asking someone to, like, keep this thing running. Which is why I'm leaning towards maybe we just have a machine that we SSH to from a Prow job and run the build. That might be a lot more manageable, because we're not really depending on that box much, and we're not, like, trying to ourselves become a Kubernetes provider or something. So far, for the main cluster, we use GKE and let some magical Google SREs somewhere manage that for us, hopefully.
C: But getting it maintainable long-term is definitely gonna be interesting. For the Linux amd64 stuff, we should be able to set up some builds on the trusted cluster and just auto-push images on Kubernetes releases and get that mostly maintaining itself, but I'm not sure how well we can do that on ARM, just because, like, maintaining an ARM cluster itself is gonna be a lot of work.
C: I'm actually sort of leaning towards: it might make sense to just stick to configuration, given how small and simple our config is. I don't know if it's a massive ask to ask someone to, like, put a snippet in to write out the config first. But I know, myself, when I'm using a tool: if I can just toss a flag on, that's the first thing I'm gonna do to try it, and then I'm going to come back and actually use the config.
C: So, to be fair, I'd already been discussing possibly making that load balancer implicit. I don't think it's super great UX that if you want a multiple-control-plane cluster, you also need to remember to add a load balancer that you can't configure in any way. And, you know, if we do later add configuration for the load balancer, we can have a field for configuring the load balancer.
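(For reference, the explicit style under discussion: at the time the balancer was just another entry in the nodes list, roughly along these lines, with the role name hedged from the alpha-era config:)

```yaml
# Then-current explicit style: forget the first entry and a
# multi-control-plane cluster doesn't come up correctly.
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: external-load-balancer
- role: control-plane
  replicas: 3
```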
G: If it is something... I think that, first of all, those are two separate concepts: the UX, and then the flags, which are something that we can design in order to make it simple for everybody. This is why I implemented a PR to add the two flags and to make the load balancer implicit: because I see flags as something that is for the beginners, for the users that don't want to think about it.
G: But the fact that in the CLI there are two flags, and this flag makes the load balancer implicit, does not imply that you have to do the same in the config. You can leave the config free for the expert user to make their own cluster. For instance, in the config it might be that I want to add a load balancer in front of a single node, of a single master, because then I want to grow the number of my masters.
C: That's what I'm wondering, like... so the other routes you can go about this are: if those flags are specified, the config flag is banned, and if you detect it, you error. Or we could just not have those flags and just say "only config". If you want anything besides the name, anything that is not just a simple per-instance thing, you need to use the config.
G: I think that, okay, the problem will be fixed if, say, we leave the flags that map exactly onto the config, and we define some sort of precedence rules for the flags; for instance, a flag that has arrived takes precedence over the config. And considering how the config in kind is changing, I see that now, basically, we are using config as the configuration, because there are no more hooks and there is only the...
G: Okay, so this blocks my idea that at the end the config will be really simplified, whatever number of replicas for each role. But okay, so we are in a state where the flags will never match the config, because the flags are an abstraction, or an oversimplification, over the config. That's...
C: That's almost fair. I think we've had a number of features like that requested, and I think it's only going to expand over time: people want to treat these more like they would a normal node, and want to be able to, you know, set things on them. I mean, so, something else we're gonna want at some point, for testing purposes, is the ability to set different nodes to different images, so we can run a mixed-Kubernetes-version cluster.
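(A sketch of what per-node images might look like in the config; this is illustrative, not the schema as it existed then, and the version tags are made up for the example:)

```yaml
# Hypothetical at the time: pin individual nodes to different node
# images to get a mixed-version cluster.
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  image: kindest/node:v1.13.2
- role: worker
  image: kindest/node:v1.12.5
```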
C: So what I'm thinking is that, like, depending on what we wind up adding to the load balancer: most people are not gonna want to touch anything about the load balancer at all. It just, like, exists if you have multiple control-plane nodes, which a lot of people are also not going to do, because that really only makes sense if you're testing, like, say, kubeadm. If you're doing anything else with kind, like the Cluster API using it or something, you're likely sticking to a single node.
D: So, in my opinion, yeah, it's not too much work for the users to always add it; it's a couple of lines, I think, to add the load balancer. And if somebody, like, requests eventually the configuration for the load balancer, for HAProxy, it's going to be very easy for us to just expose it, right?
C: So we can still expose it, but instead of putting it in the nodes field, we can have, like, a load-balancer field or something. I think the only reason it would make sense to have, like, a load balancer in the nodes is maybe if you're going to allow having multiple load balancers doing different things. But even then it should probably be a different field, because the format of what you can set and configure is different from the attributes of a node.
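(A purely hypothetical sketch of that idea, a dedicated top-level field instead of a pseudo-node entry; none of this is kind's actual schema:)

```yaml
# Hypothetical shape only: the balancer gets its own field with its
# own format, rather than living in the nodes list.
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  replicas: 3
loadBalancer:      # not a real field; illustrates the proposal
  image: haproxy   # illustrative
```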
D: So this is a topic I wanted to discuss. I mean, it is really scary to move the config into beta; that's my experience from kubeadm. We've basically been in alpha now for a couple of years, and I think that, like, unless there are objections, I mean, we should keep the config in alpha for as long as possible, and this allows us to be very flexible in the changes we make. We can pretty much nuke entire structures out of the config; nobody is going to mind, because...
C: My point is that I think it's important that people get a response from us that, like, we are supporting this and we are not going to break them, and that we're able to, like, quickly release other changes. Because we're going to need to release often to fix small issues with Kubernetes; like, for example, when we had that DNS issue. And if we keep changing the alpha format, people can't safely upgrade easily.
C: I'd really like us to get to a point very soon where I can upgrade kind without thinking about it; I shouldn't need to think about it. And that should let you live against the Kubernetes master branch: you should be able to test that. But in order to test that safely, you're going to need to keep upgrading kind periodically, because we're going to need to put small workarounds in until maybe, perhaps, someday Kubernetes decides...
G: Three things that I have in my pipeline to propose for kinder. One is just to improve code coverage and testing. One is to add another type of node, which is the external etcd, which is a scenario that will be basically very similar to the load balancer: it will spin up a node and run an etcd in Docker on this node, and this will be...
C: So actually, I think having a command to load Docker images is something I've thought about for a while, and I'm thinking hopefully this week, a PR... I think we can support two Kubernetes versions without actually supporting it, by just having better tooling for loading stuff into the containers, yeah. So we have laid a path: if we have a way to patch in another image that you want to load, and then you can just use, like, cp to add binaries to it, we can have a small tool outside of kind run that...
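(A rough sketch of the kind of tooling being described: side-loading a locally built image into a node container. The node container name is illustrative; at the time, kind nodes ran a Docker daemon inside the node container:)

```sh
# Illustrative only: save a local image, copy it into a node container,
# and load it with the Docker daemon inside that node.
docker save my-image:dev -o image.tar
docker cp image.tar kind-control-plane:/image.tar
docker exec kind-control-plane docker load -i /image.tar
```

This is roughly the shape a built-in load command could take.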
C: Like, that's a good thing, and it's also something that I wouldn't want other users using. I want to... I think it'd be great for, like, kubeadm testing to use it, but I don't think that's something that we should, like, advertise as "look, you can run upgrades", because no one should do that. It's a hack, I think. No...
E: So perhaps we can provide, like, what we want to ship to users as a kind CLI, and then actually provide packages, or, like, where you can go and get a reference to a cluster and then say cluster.LoadImage or cluster.Exec, and easily execute commands in kind. Like, instead of just bash hacks with a docker exec or something like that, we can actually then get into the containers. Because then you could write your own test framework that actually reliably executes these commands.
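(A minimal Go sketch of the package-style API being floated here; every name is hypothetical, not kind's actual API. It just illustrates "get a reference to a cluster, then LoadImage or Exec against it":)

```go
// Hypothetical sketch only: none of these names are kind's real API.
package main

import "log"

// Cluster is an illustrative handle to a running kind cluster.
type Cluster struct{ Name string }

// GetCluster would look up an existing cluster by name.
func GetCluster(name string) (*Cluster, error) {
	return &Cluster{Name: name}, nil
}

// LoadImage would side-load a local Docker image into every node.
func (c *Cluster) LoadImage(image string) error {
	log.Printf("loading %s into cluster %s", image, c.Name)
	return nil // a real implementation would docker save/cp/load
}

// Exec would run a command on a named node, replacing docker-exec bash hacks.
func (c *Cluster) Exec(node string, cmd ...string) error {
	log.Printf("exec on %s: %v", node, cmd)
	return nil
}

func main() {
	cluster, err := GetCluster("kind")
	if err != nil {
		log.Fatal(err)
	}
	if err := cluster.LoadImage("my-image:dev"); err != nil {
		log.Fatal(err)
	}
	if err := cluster.Exec("kind-control-plane", "uname", "-a"); err != nil {
		log.Fatal(err)
	}
}
```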
C: You don't actually need the internal actions framework to do that, if you have the ability to run commands on nodes and you have access to the cluster information. "I want to define my own action", right? But you could... I mean, there's not a lot of code in the action framework; you can write something outside of kind that runs...
C: We should even consider maybe, like, having a second binary that we provide, potentially, once we have this stuff matured, but just to keep it separate. Like: if you want to use kind for general stuff, you should use what's going on in this one. If you want to do super advanced kubeadm-testing stuff, and it's potentially less stable because we're doing slightly more crazy things like upgrading the nodes, we have it, it's open source somewhere, but it's not, like, the main binary.
C: I mean, so we have some right now, and what James talked about is having it as a package. I'm wondering if some kind of, like, kubeadm CI testing infrastructure that is built around kind can be, you know, unkind, kinder, whatever; and whether or not that lives in the kind repo is probably not super important.
C: It might even be better for CI. I would actually argue it would be better for it to live somewhere else and vendor kind. Because, in the same respect, in kubetest, where we're consuming, going to start consuming, kind for CI, we're trying to move away from... actually, you know, I know you worked on this: move away from checking out upstream and running it, and instead fetch the stable binary, so that CI doesn't get surprise-broken while we're developing kind.
C: I would say the same thing if we're gonna use this for kubeadm testing, and it might take some more time to think out. Maybe we should put unkind, or whatever it's going to be, somewhere else and vendor kind as a library. And that'll also give us another opportunity to first-hand think about how we improve kind as a library that is not just consumed by the main command line right now.
C: I do really think that, like we talked about, you know, kind's main tool should be fairly simple. I don't think it should take us much longer than another month to sort out, you know, a couple more small changes, and then just stop breaking people; stop telling them "okay, you can't do this thing in the config anymore". Like, we ripped out the hooks, but that was because that was coupled to the internal lifecycle, which is, you know, something that I think we will need to change, and it can be accomplished by other means.
C
Similarly,
with
like
actions,
I
think
we're
going
to
need
to
refactor
that,
but
for
most
use
cases
they
don't
need
that
and
for
the
other
use
cases,
I
think
we
can
solve
it
without
without
exposing
that
part,
so
that
we
don't
have
to
make
it
stable
yet,
but
for
like
basic
configure
I
want
some
nodes,
I,
don't
see
any
reason.
There
needs
to
be
a
super,
prolonged
process
to
make
that
stable.
D: Yeah, the main problem here is fragmentation of efforts, because, right, from my experience in open source, this never ends well. Because at some point this kind testing infrastructure is going to want to make a change upstream, where it's not gonna be, like... it's going to become out of scope for upstream at that point, and it becomes a fork, essentially. But yeah, we can do...
C: I think we can. I mean, so, we're also... we've been asking some other people to do some of this, like the SUSE guys; I know they wanted to test their special setup this way, but the changes that they initially wanted to make upstream didn't look the most maintainable for us. I think we're going to be able to make that work without forking, though, eventually.
C: Particularly for the library use case, I think it's mostly there today. The node should have a pretty good API, and the cluster has a decent API, and those are fairly stable, but we do need to get around to marking some of those as well. And then that does need a quick refactor too: in particular, the create call takes too many arguments, and it really needs to take some kind of defaulted options struct or context thing as well. Similarly, the port-bindings support is, I think, the same way.
C
We
probably
need
to
pump
through
some
kind
of
context
with
options.
The
way
we
do
build
and
instead
of
having
some
like
global
and
field
that's
set
by
scanning
environment,
but
I
think
we
can
do
all
these
things
without
without
forking
and
I.
Think
we'll
have
a
better
idea
of
what
people
need
upstream.
If
we
do
a
little
bit
of
that
ourselves
right
now,
there
are
definitely
things
that
are
very
coupled
between
the
library
and
the
command
line,
like
the
create
call
is
definitely
one
of
them.
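(A hypothetical Go sketch of that refactor: collapsing a long argument list into a single defaulted options struct. All names here are illustrative, not kind's actual signatures:)

```go
// Hypothetical refactor sketch: instead of Create(name, image, retain,
// wait, ...) growing forever, take one options struct with defaults.
package cluster

import "time"

// CreateOptions collects everything Create needs; zero values get defaulted.
type CreateOptions struct {
	Name   string        // cluster name; defaults to "kind"
	Image  string        // node image; defaults to a pinned tag
	Retain bool          // keep nodes around on failure for debugging
	Wait   time.Duration // how long to wait for the control plane
}

// applyDefaults fills in zero-valued fields.
func (o *CreateOptions) applyDefaults() {
	if o.Name == "" {
		o.Name = "kind"
	}
	if o.Image == "" {
		o.Image = "kindest/node:v1.13.2" // illustrative pin
	}
}

// Create provisions a cluster from the (defaulted) options.
func Create(opts CreateOptions) error {
	opts.applyDefaults()
	// ... provision nodes, run kubeadm, etc.
	return nil
}
```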
D: ...the cmd scripts inside the kubeadm repo: pull the kubeadm repo and run the test-cmd scripts using the deployer that is already in kubetest. That was my plan, and I'm still probably going to continue with this plan, regardless of whether we, like, split unkind and kind and stuff like that, so...
D: What about, we create different layers in kind? Like the proposal from earlier: initially, like, you can pretty much hide a bunch of flags, and even the config can be abstracted in such a way that you have a basic scenario and an advanced scenario, and the advanced scenario is not gonna be recommended to users at all. If...
E: I mean, I'd put it somewhere else. I see unkind as more of, like, the glue that you write for your project, to work with kind better and to do what you need to do better; which in this case is kind of tightly related, because it's all Kubernetes. But for me on, say, cert-manager: if I need to start doing something weird or hacky, then, you know, building that small little Go binary around it, or what I guess will eventually grow...
E: Unkind, from what I understand, sounds like it'd be, maybe... I don't wanna use the word "replacement" for kubetest, but it kind of sits in a similar sort of category, I would imagine. And it might be that kubetest turns out exec-ing out and executing unkind, which then goes and runs a suite. But it seems to me like a wrapper for a test suite, I don't know, yeah.
G: I think that the idea of having a separate tool, at least at the beginning, while the usage is shaping out, makes sense. I was about to open the issue, and then we decide where the line is; and then, in the meantime, I'll talk with the team about the repo, and we decide when, as soon as we have some more info, I mean.
C: We might not even need one; maybe just a different command and just a big scary warning that's like, "hey, this is our CI goop, you probably don't want this." We could even hide it in, like, the hack directory or something, and say: this is where the CI goop lives, leave it alone. And yeah, if we get it to a point where it's really stable, then maybe we should also move some of the features over into the main binary. So...
E: I mean, if we continue to add these features into the library portion of kind, then maybe unkind becomes, like, the experimental edge release. 'Cause, really, to actually enable that in kind, you're just porting over the CLI code, as opposed to actually the main implementation; everything else will be the same, and we should get to cherry-pick across different features as and when, if there's a need to kind of do that, I suppose. But...
G: ...doesn't work, so I guess, you know, if...
C: They did... it would be really nice if they would help fix that, but...
C: I thought it'd be really cool someday; we should keep that code around, but I think we should avoid doing that right now, because, you know, it's more complexity. And if you look at everything we're doing testing-wise, we're trying really hard to not have to use the GitHub API more than we need to, because of, you know, rate limiting and other flakes and inconsistencies.
C: The short answer to that is: we do that where possible, and there is some tooling that, for example, does just, like, downloading a build and running that. For presubmits it's a little trickier, because of, like, the UX of that. We are eventually, hopefully, going to use Knative pipelines to do this; in the meanwhile we have, like, a half-baked thing, and we're backing off from that, and instead what we do is we build with Bazel and we use a remote cache, so it's not actually building most of it.
C: Eventually we would like to get back to not building multiple times, but things like "how do you tell the CI to retest" and stuff need more thought. We have this run-after-success thing right now, where you can chain jobs to do that, but it's not well thought out, so we're currently in the process of removing it while we rethink that. Oh...