From YouTube: sigs.k8s.io/kind 2019-02-18
B
Actually, that was basically the idea. For those who are not familiar: I have been playing with the idea of using kind as a Cluster API provider, for testing the Cluster API, which basically means deploying a controller that will call kind for provisioning another cluster. I mean, in a kind cluster there will be a controller running that will deploy another independent cluster. That's basically it for now, and that's the most interesting of the ideas, and later on—
B
We will also be able to add nodes to an existing cluster using the same mechanism. So that basically means that the controller, which is running in a kind cluster — which is basically running in a Docker container that is a node, itself running Docker inside — this controller needs to talk to the outside Docker, I mean, to create another node, for instance, or another cluster. So what I would have suggested — my initial idea — was to run this controller, not for testing, not inside the kind cluster, but outside, just to prevent this issue.
Did you look at the document I pointed to in my comment on Slack, and also in the issue that I created for this topic? I listed two options there; that one was, for me, the less convenient but easier in a sense.
B
That's one, and the second one is that we then need to change kind to tell it which Docker socket to talk to, because by default it will try to call the local Docker, since that's basically the only Docker it knows about. So it's like two changes: the first one is kind of okay, acceptable, but the second one is — as I was saying, it's ugly, because it's very kind-specific, introducing one additional configuration option or flag or whatever.
B
However, I have not been able to find any other solution, so I just wanted to bring the topic to the community, because I want to move on with the design. What I want is to move on and say, well, does this really make sense? So basically I wanted some ideas — your feedback on this particular issue. Yeah, so—
A
Something on the host is not great, because now, potentially, this thing that's running is now a Cluster API thing as opposed to ours. On the other hand, that does make having the bootstrap cluster a little weird if you're also running a component — basically a controller — outside of the cluster. On the other hand, I think that's kind of what a cloud provider looks like, you know: you have something running somewhere that you talk to that's outside the cluster, actually.
B
So it's kind of viable, the right way to model it — the same idea: you don't really need to run the deployer inside the cluster, because, as you said, it can be something running outside, I mean, in your infrastructure. So I will follow this path, but since there is some interest in the Cluster API community for this use case, I need to check with them: is this acceptable for them?
B
No, actually — after this one I wanted to make the machine provisioning work as well. The kind that is on the host, I mean. You have the controller — and not only that: what I actually did is split this into the controller and the actuator. The controller is run, as usual, in the cluster; the actuator is the actual provider-dependent piece that, in this case, would call kind. I cannot make that run inside.
A
So mounting things is something I've actually been working on — hopefully landing soon — and it might also be an interesting topic for discussion. But the problem with mounting things is knowing what to mount, and then it's also on the Cluster API side to provision a kind cluster that supports this use case. I don't know at what point that makes sense; running the actuator outside seems like the sanest approach for now. Otherwise we might get into some really bad situations where nested Docker goes too many levels deep and things start acting up, I imagine. Particularly given that, for example, cgroups v1 — which is what all the container runtimes are using right now — can't actually be nested; they can kind of be mounted adjacent. So I'm not really sure. Also, networking gets strange, because you have network namespaces inside namespaces, and overlapping address space.
A
That said, we have had some funny issues with mounts and cgroups and things, for running kind on Kubernetes, or just with a normal cluster like GKE or something. I definitely think they're worth looking into, but I'm pretty sure the most reliable one is going to be the actuator on the host — assuming the actuator is not going to, like—
A
So I think it is maybe slightly worse UX to need to run the actuator, but on the other hand I feel like that's kind of close to, you know, "turn on whatever your cloud thing is and then run the Cluster API." Exactly — if it's a real cloud, then you just get an account or something, whereas we're kind of like: okay, you don't need to make an account, but you run this binary. Okay, that seems pretty reasonable. I would probably try that approach first.
B
That's the other point — I'm not really worried, as long as I can communicate from the controller to the actuator, which is running outside. But, as you mentioned earlier in the comments around this thread, there is the matter of exposing ports, etc. — the opposite direction, which is calling from the inside to the outside. I understand that works — I have to try it out, by the way, but—
A
This may be a good way to find out what the problems there are. I think, in general, what we're going to wind up having to do is something like a well-supported path to, like, an SSH tunnel between the host and there, just so that you can open arbitrary ports and not have to run a new container for each port, and so you can talk in both directions.
A
Because then we could do something like: on all of the nodes, we open a tunnel back out to the host machine, and there's some port you hit that actually goes to localhost on the outside — and that works both ways and is pretty flexible on how many ports and what. I'm just not sure what the easiest route is yet.
A
All right, I need to go look through that in the end, but we could have something that runs those tunnels of some form on the host, and then there are various ways we can put it into the network: we could have a container that just does the tunneling, like a separate node, or we could maybe even schedule a deployment to the cluster — like a DaemonSet on every machine — that opens all of the ports that we're tunneling, or something like that. But we need to know that in advance, I mean — well, we could reschedule the DaemonSet, right. Oh, and on the host side we don't need to know the ports, because we'd just have a binary that opens and closes ports as needed.
B
We should take a look at how we are going to do this, because they had a very weird architecture — they actually use a public virtual network to reach the API server of the cluster, because they have like a closed cluster management setup, and for the communication they use, probably, a VPN. So they are—
B
That made a lot of sense. Okay, okay — I will keep working in that direction, because for you it makes some sense. I will check back with the other people that have expressed some interest in the idea of using kind as a cluster and machine provider for the Cluster API, and see what they think about the idea of running it outside, or just explain why it makes more sense — because it's not only me trying to design this; the first time I ran into it I was saying, hey, I have a problem here.
A
In the node image, for example — today we don't assume anything about Kubernetes; instead we actually kind of ask the node image, "what Kubernetes version do you have?" I'd like to do the same thing for the runtime and CNI and that sort of thing. And as part of that, I'm also looking at — I'm not sure Docker is the right CRI by default, long term.
A
That's basically what I'm thinking — I am thinking about what that transition probably looks like. For example, I believe the Cluster API is using containerd; I believe that's kind of the transition for us. There are some nice things we can do if we're specifically targeting that, around things like mirroring and caching, that are starting to land. It's designed to be pretty pluggable and not terribly opinionated, but I think the slow transition path we can take for now is that Docker itself is starting to get containerd underneath, I believe.
A
That would be a good one. Also, I'm not actually expecting this to be super great for testing that exactly, but it will be really useful for things like the socket detection — are we passing that through correctly — things like that, that sit at the interface between the kubelet and the CRI. For actually testing integration with the CRI, we definitely need real machines.
F
You have to share a key between the master and the node. So when you do kubeadm init, it uploads the certs, encrypted, to a secret in the cluster, and it gives you an encryption key. When you join — when you run kubeadm join — the node downloads the certs and decrypts them with that key.
A
I mean, this definitely sounds like something that we should be trying, given that it works, though I don't think that's, you know, the most important thing right now — we have a lot of other work that could be super useful on cluster creation. In general, the reason almost all of that is just in an internal package right now is that the abstractions are pretty weak: it basically just has access to the entire context. I think we probably want to do some other fairly major refactoring around that. Similarly, right now you also pay for the number of nodes that you have — which we're starting to notice as we're doing more CI with kind — you pay in serial boot-up time for how many you have. I think we can probably—
A
I think we're going to need to have another discussion at some point to think more about how we handle doing alpha things, because we probably want some path for testing where we can, you know, do all of the alpha things — but for something like a Cluster API bootstrap, we probably don't want to be shipping clusters that are using alpha everything all the time.
F
No, no, I agree. I think that what we discussed about kind is the way out of the situations I'm experiencing. I'm trying to prototype it, and my impression is that the kind library is already good, and maybe next Monday I will show something to you, if you have time. I have two or three points where—
F
The current kind library needs to be extended in order to support this use case from outside, as a library, but these are really small changes, so they are not impactful, and I think they can be accepted. My biggest problem — or, let me say, my biggest area to be addressed — is the build part, because, and I agree with you here, it would be great to find a way to make the current kind build part more modular, so I can reuse the build.
A
We probably actually want to go a route where we don't necessarily need to build the clusters, where we could take prebuilt artifacts, because for CI purposes, pretty soon we're going to want to stop having every kind job do a build — just push a build somewhere and pick up that build. We do this pretty heavily for every other way that we test Kubernetes, just to avoid that; from the CI cluster's point of view, one of the most expensive things we can do is build Kubernetes.
C
Can I interject with something else — another feature request that has been out there for a while — and that's provisioning the kind nodes without starting kubeadm? If we do this, the users can then basically decide what flags to run on the kind nodes.
A
I do think we should do that. I would like to see some more thought put into it, though. Something that I want to be able to stay away from — and I'm not sure how to reconcile this — is having lots and lots of disjoint flags, where, like, oh whoops, you specified this flag and now everything else is invalid. And this kind of feature is going to do that to, for example, the wait flag that we have today. And maybe the answer is that we should move some of those things out of flags, or — I don't know, but this one will definitely be another disjoint flag, where things like waiting, or whatever functionality you've specified for the CNI, are not going to work, because we can't do those things until after kubeadm is run.
A
I think we should do it. I just — basically, rather than going out today and adding a flag, I could see some thought about how we can solve this. I mean, this is a more general problem; we have another one that looks like it, where you can set the number of nodes with flags. I think we need to adopt some kind of strategy for how we want to say: this is a flag, this is config.
A
This is how we deal with having flags that are disjoint. I mean, another route we can take is, say, a different subcommand: we could have — I don't know what you'd name it, but "cluster something" — just to clarify that this has different behavior at a top level, as opposed to fifty different knobs on the same call. Or — yes, maybe fifty different knobs on the same call is the right answer; I just think we should explore it a bit before we do it.
C
You remember when we started adding all these flags — I pointed out to you that we should stop. Yeah. Basically, the config for kind is, you know, a fairly small one. We can technically start forbidding stuff from the command line and start feeding it through the config; we can do this already, and we should do it now, while the project is fairly young, rather than later. Yeah.
A
I'm kind of leaning that way myself. I think we can solve a lot of these problems that way, and I think it's a lot easier to explain and validate structured config that is disjoint: you can have an entire substructure of config that is associated together, and maybe you can only specify one or the other — something like how Kubernetes does volumes. But I don't know; I do think that's another discussion that we need more of.
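The "disjoint substructure" idea — modeled the way Kubernetes volume sources hold exactly one concrete type — might look roughly like the following sketch. The type and field names here are hypothetical illustrations, not kind's actual config:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sub-configs: each groups the options that only make
// sense together, mirroring how a Kubernetes VolumeSource holds
// exactly one concrete volume type.
type FullProvision struct {
	Wait bool `json:"wait,omitempty"` // e.g. wait for the control plane
}

type NodesOnlyProvision struct{} // just boot nodes, no kubeadm

type Provision struct {
	// Exactly one of the following may be set.
	Full      *FullProvision      `json:"full,omitempty"`
	NodesOnly *NodesOnlyProvision `json:"nodesOnly,omitempty"`
}

// Validate rejects configs that set both (or neither) branch, so an
// invalid combination fails loudly instead of being silently ignored.
func (p Provision) Validate() error {
	set := 0
	if p.Full != nil {
		set++
	}
	if p.NodesOnly != nil {
		set++
	}
	if set != 1 {
		return errors.New("exactly one of full or nodesOnly must be set")
	}
	return nil
}

func main() {
	ok := Provision{Full: &FullProvision{Wait: true}}
	fmt.Println("valid:", ok.Validate() == nil)
	bad := Provision{Full: &FullProvision{}, NodesOnly: &NodesOnlyProvision{}}
	fmt.Println("both set is error:", bad.Validate() != nil)
}
```

The advantage over disjoint flags is exactly the one raised above: options that only make sense together live together, and the invalid combinations are caught by one validation rule instead of per-flag special cases.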
F
At this point, having the capability to start the bare nodes, ready to run Kubernetes, is one of the two problems — the two missing things — that I saw in the library. What I'm expecting is that kind exposes this in the library; I'm not expecting that kind will surface it in depth in the UX, maybe in the config — but that is up to you.
A
I mean, I could almost see us — maybe we only do it in the library now, but as it matures, maybe what we could do is something like kubeadm phases. I don't think we have enough of an idea of what the end state looks like to declare the phases today, but I think we probably could eventually say: okay, there's a phase for creating the nodes, there's a phase for actually running kubeadm, there's a phase for post-kubeadm setup.
F
What I was trying to say is that, okay, I think that explaining this to the user raises all the problems you are describing — we want to give a clean surface. Whereas if we stick to the library: currently the users of the library are very few, and we can clearly state that the library is in an alpha state and — being part of this group — that the library can change. Yeah.
B
That is, keeping the kind command line for the main use case, which is creating a cluster, and exposing everything else as an API — because otherwise it's really ugly to me. Because, for instance, for my use case I need to get access to stuff like what I'm talking about, so that I can later add additional nodes to the cluster and join them with kubeadm. So how do you do that in the user interface?
A
I totally agree; I just want to acknowledge that at some point some of these things might also make sense to surface in the user interface — something like provisioning the nodes. You know, I think enough, say, kubeadm developers would use this that it probably is eventually worth it.
A
I'm not sure what it should look like, though, and it would be good to explore that, because that hits some of the same intersection of: okay, we should move this into config, and how should we structure this so that we have room to add more options, and avoid making it easy to write invalid configs. The same thing with flags: avoid making it easy to write what looks like a valid command line but won't work, because, oh, you've specified foo and bar and you can't have those together.
C
So let's move to the last topic, because I see James has himself available over here, and we were discussing a bit of the PR that integrates the CRI API as part of the kind API — not exactly: it made a copy of a certain object from the CRI API. So, first of all — both of you, you and James, understand API machinery better than me, normally. The question here is: we are exposing to the user something that is alpha — alpha fields — even if we are making a copy.
A
Yeah. The additional thing is, there are actually two PRs open, and they take different routes. One is that we actually take the upstream types and we embed them and try to do something with that. The other is that, okay, we mimic the upstream type, but we don't guarantee that it's exactly the upstream type; we're using it for the on-disk format, and ideally eventually it can actually be the upstream type in memory as well, and that'll be less work for us to shim. But, I guess, additional context is that technically CRI is v1alpha2 still; it's not clear when they will actually move to beta. It's been pretty stable — there have been a few things that broke, they've made breaking changes, most of which I don't think are things that we touch. Also, specifically, this type is mounts, which hasn't changed at all for like two or three years now — but that doesn't mean the other types that we use won't, and it's not clear, you know, whether we should be importing the types or not. I have found one issue with this.
A
You'll find it if you dig through the code a bit: the on-disk format for that type is not great for enums, because the generated proto code just uses the integer values, as opposed to the named enum values. So when talking about mount propagation, instead of saying "bidirectional" it's like "2" or something. So we've implemented custom serialization to use the enum names instead, which already makes it technically not the same as the upstream JSON type.
D
Yeah, I've just been taking another look — I know I tried earlier. I think it looks good; I like the idea of sticking to CRI for it. On what you were saying about copying the API versus vendoring it, or whatever: traditionally, with most API machinery things, the advice is always, I think, at least, that you don't tend to import another API — like some random type that isn't specifically crafted to be kept stable.
D
It is an API type as well, but it's used via gRPC — it's kind of, just taking a quick glance (I'm going to take a deeper look), it looks like it is, you know, a gRPC type. So I personally would feel like actually just writing our own, basically identical, abstraction on top might — it might also insulate us from any breaking change they do make, and at least give us a little bit more to work with where we realign ourselves. Yeah, and then I think the only other thing that I would add—
A
So, on that note — the idea I had for this is: take what's there, but add another commit where we drop the pretense that this is just the cut-and-paste upstream type. I've also thought about making an auto-generator; I think we can do that later, but essentially the process should be: take the upstream type and copy it.
A
Then fix the JSON to actually be camelCase, add custom serialization for enums so they're not terrible — that sort of thing — so that you're getting the same structure. Sure, we did borrow it to get all the fields and everything, so if you look through the commit history it will be clear: okay, we borrowed this type at this point in time, but the immediate next step is: don't try to keep it exactly that type, don't pretend it's that type — we're just mimicking it very closely.
A
We have that implemented today, but it's actually using the generated enum value names, which are also still bad — they're like all-caps HOST_TO_CONTAINER. I think what we can do is have a commit, added after this call if we're agreed on this, where we actually use what you would see in the pod spec and the real Kubernetes API: there are some human-friendlier names that map to the actual mount propagation values, and we can have the alias serialization do that, I guess.
C
I wanted to ask you, in terms of mount propagation and mounts, about exposing the functionality to the common kind user. You know, in kubeadm we did — basically we have support for mounts, but it's very primitive: we don't expose the underlying type; we basically support a bunch of specific things for a mount, and it's a fairly sandboxed approach to what the users can do with the mounts. Do you think that we should do something like that instead?
A
I mean, out of the CRI type fields, I think all of these are things that I've seen be a problem when containerizing stuff — things you need to set. You might need to disable SELinux relabeling, depending on what you're trying to do; you might want to mark it read-only so that, you know, if you're mounting your source code through, you don't allow the things in the container to rewrite your source code. And mount propagation is maybe the only one where it's not entirely certain whether you need it. But I don't think it really hurts us much, because we are just supporting the same thing as what you see in the Kubernetes APIs, and we know this design has been proven enough over time that we're not reinventing it. I mean, we could drop that field; I just think that's removing possible functionality that costs us almost nothing to keep.
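As a config fragment, the per-node mount support being discussed might surface roughly like this — the field names are a guess at the shape under discussion, mirroring the CRI/pod-spec Mount fields, not a committed kind API:

```yaml
# Hypothetical kind config sketch: per-node mounts with the CRI-style
# fields mentioned above.
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/user/src
    containerPath: /src
    readOnly: true          # don't let the node rewrite your source
    selinuxRelabel: false   # may need disabling depending on the host
    propagation: None       # enum serialized by name, not by integer
```

Keeping the same field vocabulary as the Kubernetes APIs is the point made above: users already know this design, and kind is not reinventing it.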
A
The internal type is not the same, but it matches very closely — these are essentially just the volume mounts that you have on your container. Although, technically, I believe there's another type that's doing an alias, that's actually in the pod spec as opposed to what's in CRI, because I think that actually maybe predates CRI.
C
I understand. I just don't trust the Kubernetes API for types right now, because, you know, kube-proxy's config is alpha and it's pretty entrenched — so it's going to break for a lot of people, pretty much. Yeah, because SIG Cluster Lifecycle and the component config people are going to start refactoring the v1alpha1 of kube-proxy, and because it was left in alpha for such a long time, it gathered a lot of consumers — and now, you know, we have to break it. Well—
A
I will say I hear that concern, which is why I think I would like to go with the second route, where we just copy the type — so we can borrow the ideas without depending on the types. I hope that, long term, we can actually replace these types with a type alias and just put serialization aliases on them, but I don't want to depend on that.
A
I will also say I spoke to some of the people that work on this, offline, and, yeah, they're not ready to commit to the time — to not being allowed to break things, like making this beta and exporting it — just yet. That said, they also really don't want to see it broken, because they also maintain implementations, and they've been pretty hesitant to make breaking changes in the past; there's been about one.
A
And the other thing is, I think large parts of the CRI we're never going to expose to the user — we're not going to let you configure the entire container, probably; that's something that we generate internally. But the DNS and the hosts, and maybe network configuration, or things like that, are probably pretty safe to expose. Things like disks or whatever don't really belong in kind, I think. Yes.
C
Now, this week, I'm going to start moving some of the other kubeadm branch tests to use the new deployer. Thank you very much for helping me with this — if you hadn't this weekend, I was probably going to be completely stuck, because it was super tricky to figure out, with the artifacts especially.
A
Yeah, I wanted to see that fixed anyway, but, while we're being honest, there were also things like kubetest that were kind of itching at me that I hadn't, you know, fully landed yet — it's a good time to land these things. I have to write up my yearly perf stuff this week, and it's going to be nice to say, hey, look, we have the kubeadm thing actually working now, as opposed to "it's almost there." So yeah, if you hit any more blockers, let me know — it helps me too.
A
Do
once
it's
a
little
bit
more
stable
I
noticed
we're
having
some
kind
of
range
issue
work
log
dumping
sometimes
does
it
work
just
what
like
once
or
twice
I
hadn't
had
a
chance
to
find
out.
Why
and
that's
not
something
that
we'd
been
seeing
any
other
one,
but
I
do
think
long
term.
We
should
be
switching
both
the
other
reason.
I
haven't
switched
everything
yet
is
hopefully
going
to
be
picked
soon.
A
The
pot
utilities
which
are
what
we're
using
on
those
newer
jobs
they
are
not
providing
the
committe
metadata
as
well,
which
is
somewhat
important.
We're
testing
I
think
you
test
actually
already
fixes,
as
though
but
I
we
need
to
confirm
that
the
commits
are
showing
up
in
test
grid.
Before
we
switch
that's
going
to
be
problematic
for
things
like
blocking
jobs,
it's
also
hopefully
getting
fixed
in
poly
tools.
A
Right
now,
I
know:
Sun
is
working
on
it,
but
I've
been
hesitant
to
you
know
we
significantly
refactor
the
kind
jobs
because
I
because
I
know
we
need
that
fix,
or
at
least
I
thought
we
did,
and
once
we
have
that
fix
it
will
make
sense
to
just
rewrite
them
to
not
use
bootstrap,
not
use
scenarios
not
use
our
own
script
and
go
all
in
on
just
a
pile
utilities.
Job
that
invokes
cube
test
directly,
which
is
a
lot
more
manageable
and
up
to
date,.
C
Yeah, especially for new contributors — it's massive amounts of boilerplate. I wanted to ask a quick question, and then we should probably end it. You know the GitHub client in test-infra — it has a function for fetching the commits in a PR, like a ListCommits for a PR, or something like that. Does this function happen to return the commits in a logical order?
A
We've had a big problem with this throughout their API. We have some other code that will try to actually find the head commit, but I don't think we have code that actually gives you the tree-sorted order currently, or, like, the topological commit order. Instead, most of GitHub's API sorts based on the author date, which is probably the least useful way to sort it, because that's just the original date the commit was created.