From YouTube: sigs.k8s.io/kind 2019-02-25
A
A
B
A
A
A
So, context is, say: cluster lifecycle is setting up more jobs with kind, and there's some discussion about possibly release blocking and possibly not. Kubeadm, or excuse me, kubernetes-anywhere, being release blocking, and there's some concern. Sorry, in the SIG Release meeting (well, release team meeting), which is just before this meeting, yeah.
C
C
C
A
C
C
A
A
Yeah, so I'm not super worried about that, though. I think that a lot of the things that are really interesting in the cloud provider path are storage in particular, and so we can do some more to fake that out. But having talked to the storage people, I don't think we can fully do that, because there are paths where you hand off, like having things mount, like disks mounted and things. I really think we should still use cloud VMs or bare metal clusters for that sort of thing, and be testing providers.
A
A
So I'm not super concerned about that, as long as that's covered. Technically, kubernetes-anywhere I think had some limited support for other platforms, but we weren't running that, and all of the patches that I've seen going in to keep the CI running have just gone to the GCE one, because that's what we're actually running at this point. Almost all the activity is basically just Lubomir trying to keep the CI on its last legs.
D
A
C
Yeah, it's that thing where, like, so I agree with that. My plan is not to immediately flip the switch and suddenly change things, because I don't want to use this as precedent for "hey, can we add or drop jobs just because we want to?", so there should be some level of hysteresis involved. I could imagine maybe a reasonable compromise is to add both of the jobs and to move the existing kubernetes-anywhere job over to release-informing, which isn't a hard block for us.
C
But it's just a thing we are supposed to be aware of. And to put kind in that same place, and to ask for, require, some amount of data that shows that over time we're getting good, reliable signal; we document that we know what coverage we're giving up and what coverage we're getting; we're clear on who's supporting it; and then move it forward.
A
In fact, on that note, I actually know there is some nonzero amount of flake. Some of that is super rare; rarely we have, like, a cgroups issue coming up. The rest of it is, I think, some of the tests, the conformance tests, when they run in parallel become ever so slightly flakier. Yeah, so I'm gonna expect this to not be a 100% green signal. It's gonna be one of the greener signals we have, hopefully, but we should really record that and see it first. Do we have a... do we?
C
D
D
D
Yeah, sorry I was late. Hey, hey Lubomir. Just to catch you up: what I was suggesting in the previous call was that it's running right now and it seems to be doing its thing right this minute. So until it starts turning red, leave it alone, and when it starts turning red, then we do not try to do what we did before, which is to try to fix it. We'll just move it out at that point.
E
Alright, so, I used to fix it during the weekend. The problem is that kubernetes-anywhere breaks every week at this point, and I have to say that test-infra is part of the problem as well, because of the integration with the e2e testing framework and the deployer; it's super entangled. I mean, if you have the argument that we're testing this GCE infrastructure with kubernetes-anywhere, like a backup signal, I understand this argument, but lately it's an area where I don't think that's the solution for this. It's just super, super unstable, yeah.
A
I want to note, I'm pretty actively working on trying to disentangle those, and hoping to start piloting that on some of the kind jobs maybe later this week. We're doing a full-on rewrite of kubetest and the image that goes with it; that has also gotten to a point where it's difficult to maintain. There's also a separate problem that we need to disc... sorry. There's a separate problem.
A
We need to discuss, maybe in another channel, where you need to pass in the number of worker nodes to the tests right now. Because we have things like kube-up.sh, we don't really notice that in CI, but this is going to become a problem for standing up other CI, because there's this flag that, instead of being detected from the cluster, means you need to invoke the tests in a certain way or they fail.
E
C
C
...the switch, and use that as precedent to flip the switch for everybody. What I think I would rather do is move this job off into release-informing and take a look at what kind of precedent we want to set, for a history that proves viable signal for kind. My other question is: what should we do about the kubeadm jobs that use kubernetes-anywhere across all of our release branches? Because, for example, if I look at the release-1.13 blocking dashboard, the kubeadm jobs, they're seeing green.
E
Same pattern on the older branches as well, okay. I mean, for consistency: the fact that master is broken currently, it's because the k/k master is broken. But if we make a change in test-infra for this, it might also break the older branches that are tested using the deployer, because test-infra is independent of k/k, okay? And I've seen this happen many times. When you have to fix this, it's simpler to fix the deployer.
G
C
Like, again, I think, you know, you are, I consider you, a kubeadm maintainer. I hear from the kubeadm sub-project that they don't want this to be their supported way of saying that kubeadm is exercised continuously in CI; you'd rather have something else, so I get that. If the kubernetes-anywhere folks feel like they would like their stuff to be continuously exercised in CI, I welcome them to come and talk to us and propose a way of keeping it maintained. No?
C
E
C
A
Well, basically, what I'm hoping to do here is get some of the breaking changes out of the way and clarify some things. Like, our top-level configuration object is called Config right now; calling it Cluster is a little bit clearer. Fabrizio already mentioned this previously in another issue, which I missed. I don't think that one has any concerns, but there are a number of other things.
E
A
So I... we can't keep it. My concern is that basically replicas becomes near useless once you set anything more than the role, or maybe image, which is at most two lines of config for each node. The other things, like the host mounts, or possibly ports, or things like that, are going to need to actually be per node.
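For reference, the per-node shape being argued for, a role and maybe an image per entry, with per-node extras only where needed, looks roughly like this sketch (field names follow kind's config as it later settled; the mount values here are illustrative):

```yaml
# Illustrative per-node config list: at most two lines per node in the
# common case, with per-node extras like mounts where a node needs them.
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - hostPath: /tmp/kind-images   # hypothetical per-node host mount
    containerPath: /images
```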
A
A
I don't think we have... I still haven't seen a reason to do that yet. I'm sure we'll find one eventually, but I also think it's going to be the kind of thing where you're doing some pretty advanced testing, and you can either, you know, put up with it, or you can have some tool reading the kind types and generating a config file.
C
A
A
A
E
A
Also, for the most convenient case, where you weren't setting these anyhow: we probably should follow up on Fabrizio's PR, where you have flags to set the number of nodes and you just get default nodes and don't set any of the fields, and just go ahead and bifurcate those scenarios. Either you're just making N default nodes of a type, or you're actually configuring nodes, and if you're actually configuring nodes, you can, you know, add an entry for each node.
F
E
A
Okay, what else was debatable there? I think that was the main one. The other one was port naming, which I've agreed to just punt on. So, I was trying to get an nginx ingress working on this, and you can kind of do it with one; it's not very flexible. I think we probably should just go ahead and come up with a better solution than that. I'm willing to just totally drop that from this and come back to it, the, you know, ports on nodes.
A
So, so yeah, it was gonna let you do something like: you add some specific ports to a node, and those get actually set on the node container. And then you run something on host ports with host networking in the pod, and then you can route traffic through that, and that kind of works, and maybe we should have an option for that.
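The host-port idea described here, declaring specific ports on a node so a hostNetwork/hostPort workload (such as an ingress controller) is reachable from the host, could be sketched like this (the `extraPortMappings` field only landed in kind after this meeting, so treat it as forward-looking):

```yaml
# Illustrative: map a port on the node container out to the host, so
# something bound to the node's port 80 is reachable on localhost:8080.
nodes:
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
```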
A
A
So, pretty similar. The main thing with using the load balancer solution we have now is the same thing, where you have to have created the container with all the ports you want. So if you, you know, deploy something and you're like, "I want another port," then we need to rerun the container or something like that, or you need to have specified all of these up front and we need to have, like, already load balanced, and that's kind of weird for Kubernetes to have, like...
D
H
A
So you can also do this by deploying a socat container yourself. The real tricky thing is: how do you open and close more or fewer ports, or that sort of thing? So an option that might make sense there, that's similar to some other ones, is something along the lines of: there's a little agent you run that, like, opens up a port with some tunnel, and so then, instead of a load balancer, we just run a tunneling agent. Maybe it's just an SSH container, and from the host...
A
We can open tunnels into the container network. The other reason that kind of thing would be nice is we can do bi-directional, so you can talk back. Because another thing that we haven't solved yet is: I have a workload in my cluster and I want to talk to an external service, and I'm gonna mock that out too, so I'm gonna run that on my local machine. You can't talk back to localhost, because you're talking to the container's localhost.
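A minimal sketch of the "deploy a socat container yourself" approach, assuming a node container named `kind-worker` reachable on the same docker network (container names, network, and image are illustrative, not a kind feature):

```shell
# Run a socat relay alongside the node container, publishing host port
# 8080 and forwarding connections to the node's port 80.
docker run -d --name kind-tunnel --network bridge -p 8080:8080 \
  alpine/socat TCP-LISTEN:8080,fork,reuseaddr TCP:kind-worker:80
```

An SSH-based agent, as suggested above, would additionally allow reverse tunnels back to the host.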
A
A
A
A
A
Sure, off the top of my head, I mean, it's pretty small, and if not, we could say, oh, it needs to be nano or something tiny. So the reason I want to discuss this is not that that PR concerns me. It's that, okay, you know, Aaron over here actually loves Emacs, maybe, and we're gonna add Emacs to the image, and they're gonna add a man-in-the-middle proxy, and, like, where does it stop? Right now, where things stop is: what do we need to boot Kubernetes?
A
What do we need to do our HA clusters? And that's it. If we want to expand that boundary, which is probably reasonable for debugging, I think we need a new boundary. And whether or not... I don't know that we need it that badly for this PR. I'm just imagining this PR setting the precedent for the next PR, where it's, you know, Emacs or whatever.
E
A
True, right, but another option that we didn't fully explore yet is having an alternate image that includes these things. My concern with that is that so far we've been pretty good on, like, when you run things in CI or when you run them locally, they're the same. Even the things that we need to do for Mac and Windows, we go ahead and do on Linux, like using localhost instead of the container IP.
F
Just... so, I'm not a fan of adding developer tools into the image, because everyone can just do a simple Dockerfile and add whatever they want instead. I think that what we need, something that is missing in kind, is the possibility to build some node image variants, for instance for adding prepared images.
F
A
For today, that's actually possible, just not documented, and I'm not sure if we want to... or maybe we want to make a separate directory for this or something. But there is just a directory that contains these things. You don't even actually need to build an image: you can use the mount-paths thing to mount through a directory with these tarballs. So actually my suggestion there, for that particular thing, is:
A
We should document a path to mount to, so that you don't actually have to build. But for the more general "I want to build custom things": the node image technically supports any base image you supply, so you could take an existing base image, do a FROM, add some stuff, and maybe that's a response to this PR as well.
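The "do a FROM, add some stuff" flow could look like this sketch (image names are illustrative; `kind build node-image` does accept a `--base-image` flag):

```dockerfile
# Illustrative custom base: start from the kind base image and layer in
# extra debugging tools before building a node image on top of it.
FROM kindest/base:latest
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl vim && \
    rm -rf /var/lib/apt/lists/*
```

Then build the node image against it with something like `kind build node-image --base-image my-base:dev`.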
A
E
Thinking about the base image: currently we... basically, what was the name of this tool? We take the base image and we place it inside a constant in the source code. Yeah, so I was thinking: can we add an intermediate image, between the base image and the node image, that can be used by people? Yes.
A
So you can make a modified copy, or we can actually just point you to, like: here's the contrib directory that takes an existing base image as an argument, with a default, and adds a bunch of tools and things to it. And then you can tell your node images to build with that, with your new base image, I think.
A
E
So last night I spent slightly more than the desired time on trying to get the systemd cgroup driver to work inside the node container, and it was super tricky, because I started getting errors about cgroup root paths not working, and something about quality of service in the kubelet. And I started googling about this, and I found some threads from 2016, and someone saying it's kind of tricky to get this working. I'm sure, I'm...
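For context, the driver under discussion is selected by a single kubelet setting; in the KubeletConfiguration API it is the `cgroupDriver` field, shown here as a fragment (it must match the container runtime's cgroup driver, which is part of what makes the nested setup tricky):

```yaml
# KubeletConfiguration fragment: opt in to the systemd cgroup driver.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```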
A
Yeah, I also have some concerns about this in general, like, we're not fully following systemd's recommendations about putting systemd in a container, because we need to do Docker in a container as well. I have some... thanks, Aaron... I have some concerns about making sure that this inner systemd doesn't interact with the outer systemd if we're using the systemd cgroup driver, in that... Docker itself tries to handle this for Docker-in-Docker to some extent. I would want us to do a fair bit of...
A
F
E
A
A
E
Experiment, experiment more on this. But I was kind of stuck after four hours or something like that, trying to get it to work. I mean, maybe I can revisit this, but since kind currently works, maybe we can just put this in the priority backlog. But to my understanding, the problem here is that it might create, like, unpredictable failures.
E
A
A
F
F
Today, developing our prototype, just bringing in the binary for the target release, then you already have what you need in the action. So it is not difficult to implement the upgrade workflow, even on master; the tricky part is how you bring in the target version, but if we bring in binaries, I guess it is not so difficult. So, in my opinion, if we want to push along this way, we have to stick on kind, and...
F
A
I have a question. So I understand that we can implement upgrades, and I think it might, like you said, not even be that bad; we can probably put it in kind. But probably the trickier part of this for CI is that we need to test in such a way that we go: okay, we create a cluster, we run tests, we start upgrading...
A
F
This is another topic: how would you manage the workflow of the test? This is your question. I think that, for now, what we can do is install Kubernetes, upgrade, and run the e2e tests, and this is simpler, because we just have to add upgrade after init and join.
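The init, upgrade, then e2e sequence being proposed might be sketched as follows (node name, paths, and version are placeholders; this is the manual shape of the flow, not an existing harness):

```shell
# Rough shape of the discussed workflow, driving a kind node container
# directly with docker exec/cp (all names and versions are placeholders).
kind create cluster                                   # init/join at version N
docker cp ./kubeadm kind-control-plane:/usr/bin/kubeadm
docker exec kind-control-plane kubeadm upgrade apply v1.14.0
# ...then run the e2e/conformance tests against the upgraded cluster
```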
C
A
A
F
F
Using actions you can combine actions together; you can manage a workflow. But I don't think that this responsibility should be on kind; it should be on kubetest. I commented on the document about kubetest: if we go along this way, we can fill in an interface to set up the various steps of the cluster, yeah.
A
A
A
I think we can do that. The test function will write out artifacts to a certain directory. So what we can do is use the usual sequencing to build and up a cluster and things, and then the first step of the test is to run the upgrade, and we can have a special, like, kind-upgrade tester type in the kind kubetest.
F
I
E
Before this cycle ends, I wanted to start planning how we can handle the upgrades matter. And, you know, the upgrades are already doable with the current iteration of kind, because you can execute exec commands and you can copy the binary to the node you want to upgrade. But it's more of a question of whether we should keep the kind deployer to what it's supposed to be, only up/down/tests, the setup bit, or should we extend it. In my opinion, we shouldn't extend it, and we should rely instead on the tests.
E
E
We can provision one as well, but we can then execute some sort of a command, which could be external to kind, to basically modify the nodes in such a way that you can perform upgrades. And, like Fabrizio was saying, we can test only the resulting cluster, because we already have signal for the original one. That's the suggestion.
A
So, what if we break it up like this? So, to be clear, I'm working on kubetest2 now. It's at the point where I should be able to start writing an actual tester and a deployer for kind, like, today. I am extremely interested in being able to develop kind CI signal without depending on all of the legacy we have going on in every other e2e job. The simplest and first tester we're going to add is one where you just run a command; like, that is the args.
A
All of the args of the tester are just some command that you're running, and its args. So if we implement that, and we implement a kind deployer that does build, up, etc., which should be pretty quick to borrow from the existing one in old kubetest because they have fairly similar, if more clarified, interfaces, we can have kubetest2 do build, provision, etc., then run a command.
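The "tester is just a command" idea reduces to an invocation like the following sketch (kubetest2 and its kind deployer were still being written at this point, so the flags are illustrative of the design, not a stable CLI, and the test script is a placeholder):

```shell
# Illustrative: the deployer handles build/up/down; everything after
# "--" is handed to the exec tester verbatim as the test command.
kubetest2 kind --build --up --down --test=exec -- ./hack/run-e2e.sh
```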
A
E
I
A
B
A
D
F
For kinder: basically, it is a library built on top of kind; that's why I'm not forking, and this is intentional. And I already have the discovery part, so it rebuilds the config starting from the nodes that already exist, and then I have an action; if you want, I can show you how it works. And this is necessary if you want to invoke kind on an existing cluster.
E
Yeah, so what we can do now is start adding this into the alpha of kind, like an alpha sub-command of kind, and eventually we can bring it to GA functionality in kind. If you want, or if you think that this is completely going out of scope, we can then break it apart and move it to the kubeadm repo. Again, I thought...
I
E
A
F
A
F
A
E
That pattern has worked very well for us in kubeadm, so I think we should try the same in kind. You can, like I said earlier, at any point in time decide to keep this out of the project or bring it into the main functionality. So this is what we can do. Okay, I guess the only...
E
E
A
E
F
I
H
E
E
E
F
D
A
So I'd like to try to squeeze in one last thing: the IPv6 work looks really cool, and like something we want to test, especially considering that Kubernetes has no CI for this currently. We found that Weave is not going to work, because of IPv6, so I'd like to ask what people think about switching to Calico.
D
E
A
F
F
A
A
Yes, so that's the main thing: we're not trying to support that sort of thing. I mean, today you can do something. Once we, you know, document ways to load images without load (the mount thing, or building a custom node image), then the only other thing you need to mount is the manifest, and you just overwrite it. And we already support swapping in a different... technically, it's already possible to use a different overlay today.
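In later versions of kind the "use a different overlay" path became an explicit knob rather than a manifest overwrite; a forward-looking sketch:

```yaml
# Cluster config fragment: skip kind's built-in CNI so an alternative
# overlay (e.g. Calico for the IPv6 experiments) can be installed.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
```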
E
A
Yeah, so I've been fairly happy myself. That said, I haven't explored many of them, so I wanted to ask around. I think, given that IPv6 seems like a high-value thing going forward, we should give this a shot, especially now that they're actually supporting ARM. That was one of the biggest reasons I selected Weave initially: they had painless support for multiple architectures out of the box, including multi-architecture clusters, if that's your thing, which we don't actually need for kind, but that shows you how painless it is. Yes.
A
A
A
Okay, I will probably even move forward with that then, unless someone else wants to PR it first. It might be a little bit easier since we probably need to push images and things.