Description
Nicholas Lane and the Cloud Native Community celebrate the release of Kubernetes 1.14.
They cover a lot of great new updates, such as:
Nicholas Lane - Pod Presets in Init Containers
Duffie Cooley - Walk through of the changes around Hardening cluster role bindings and "kubectl auth can-i --list"
John Harris - Kustomize integration
Nicholas Lane - Durable Local Storage Management
Duffie Cooley - Show new kubeadm feature that will copy certs securely between control plane nodes.
Check out what's new in the Kubernetes 1.14 release in our blog post by Stephen Augustus here:
https://blogs.vmware.com/cloudnative/2019/03/25/kubernetes-1-14-windows-node-support-and-cluster-api/
A: ...about it, but think of the effort it would take to kind of migrate everything onto Windows: the retesting, the validation, all the assumptions made in the master components that they run on Linux. They use Linux file permissions, they use Linux idiosyncrasies, and all of that would have had to be ported, massaged a little bit, and then tested across the board. We thought that, you know, being cloud native, and looking at all the companies out there...
A: They have a lot of mixed workloads: they have Linux workloads, they have Windows workloads. So it made a lot of sense to say: let's start with just Windows nodes for now and enable folks that want to bring their .NET applications into the world of Kubernetes to do so. And if, in the future, there's a lot of customer demand to port the entire ecosystem to Windows, then we can explore that. But that was kind of our initial thinking, and it took us long enough just to get the nodes to work on Windows.
B: Now we can see more people adopting the Kubernetes workflow and making it work for their applications, and I think that's actually really neat. Certainly it introduces some interesting challenges for those of us who have to support Kubernetes, but so be it. You know, we're all up for a challenge, right?
C: There's an interesting question in the chat from Eric. He asks: will it be 100% Linux or 100% Windows workers, or will mixed clusters be supported? So can I have a cluster with, like, a Linux control plane and five Linux nodes while I also have, say, five Windows nodes? I think you can, yes.
A: You'll be able to. We will support heterogeneous environments where you have both Windows and Linux working together, as long as you choose the same networking backplane, whether that's Flannel or win-bridge or l2bridge. So pick a networking architecture that works on both Windows and Linux, and you can have a heterogeneous cluster running mixed workloads.
B: One thing I'm excited about, something I saw that I thought was pretty cool and have actually been playing around with (maybe we'll do a demo during this), is that the local storage provisioner has been promoted to GA. I think that's really neat. If you're testing out a simple application or something that needs some kind of storage, it's nice not to have to shave the yak and set up everything else first.
B: Otherwise I need to set up a Ceph cluster to get provisioning working, or, you know, decide whether OpenEBS is my choice, all these extra steps. If you can just say, 'just give me some storage, make sure it persists somewhere, and let me get it back next time,' I think that really helps out the workflow.
C: Definitely. One group that's really going to benefit, I think, is anyone using the stable charts, which assume things like 'I want to set up MySQL or WordPress or something,' where everything relies on a PVC or persistent volume. Then you've got to go and stand up something like NFS ahead of time and set it all up; I remember hacking together a script just to set all of that up.
B: You set up, you know, a persistent volume of the local type and then give it the information. The information that you need for local storage is just the mount point, but also the node that it's going to be running on. So it'll look like a node selector; you can set it up that way. It doesn't have to be, but you can say, 'I want this persistent volume, like a cache essentially, to come from this node,' so that when the pod restarts and gets recycled, it can still find it.
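What B describes, a local PersistentVolume pinned to one node through a node affinity term, looks roughly like this as a manifest. This is a sketch: the node name, path, capacity, and storage class name here are made-up placeholders, not values from the demo.

```shell
# Write a sketch of a local PersistentVolume tied to a specific node.
cat > local-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1          # the mount point on the node
  nodeAffinity:                    # ties the volume to one specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
EOF
# On a live cluster you would then register it:
# kubectl apply -f local-pv.yaml
```

With the volume pinned this way, a pod that claims it lands back on worker-1 after a restart, which gives the cache-like behavior described above.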
B: And that's basically it; it's a pretty simple mechanism. It's not something I got a chance to play around with too much, but it seems pretty straightforward, and I thought that was really neat. Anything out there in 1.14 that you're excited for, any new feature in early alpha or beta that you're really excited about?
B: I was talking to Josh Rosso, and he was very excited about the kubeadm certificate-sharing feature. I might be remembering the name improperly, but it's the ability to share certificates across the control plane in a mechanism that makes sense for kubeadm, which makes the kubeadm HA creation and initialization workflow feel many times better. I'm super psyched for this.
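The feature being described is, roughly, a two-step flow. Since it needs real control-plane nodes, the commands are only assembled here, not run; flag names are from the 1.14 era, where the upload flag still carried the experimental prefix (it was renamed later), and the endpoint and key are placeholders.

```shell
# Sketch of kubeadm's 1.14 certificate-sharing flow, assembled as strings.
# Step 1, on the first control-plane node: upload the certs, encrypted, to a secret.
INIT_CMD="kubeadm init --experimental-upload-certs"
# kubeadm prints a certificate key; step 2, on each additional control-plane node:
JOIN_CMD="kubeadm join <endpoint>:6443 --experimental-control-plane --certificate-key <key-printed-by-init>"
printf '%s\n' "$INIT_CMD" "$JOIN_CMD"
```

The point is what the transcript praises: no more manually copying the CA key and certificates between hosts when standing up an HA control plane.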
B: Please do. If everyone would look for his blog: he's an awesome dude, super pumped up and positive about a lot of things, and he just jumped on this and wrote a blog post about it really quickly. I read it and thought, this is awesome and just what we needed. If you use kubeadm like I do, this is, like, game-changing.
B: One thing that's kind of a known issue, and I don't want this to just be 'oh great, 1.14 is out, it's amazing, there are no problems with it whatsoever'; this is an exploration of the release. If you look at the release notes, you'll see that there's a CoreDNS issue: if the API server shuts down while CoreDNS is connected, it'll crash CoreDNS, which kind of hoses your entire cluster.
B: So, yeah, try to keep the API server as stable as possible. I know that this is Kubernetes and everything is shifting sands, but this hopefully will be remediated soon. Since it's a known issue and called out as part of the release, it hopefully will be patched pretty quickly, and we'll have a fix within a release.
C: I found the deprecation stuff interesting. They've now set a timeline: I think by default in 1.16 they're going to deprecate extensions/v1beta1, so for NetworkPolicy you're going to need to migrate to networking.k8s.io/v1. There's a whole bunch of deprecations that have been set for, I think, 1.16, maybe 1.18. And there's a way now, if you change the --runtime-config flag on the API server, you can actually just turn those APIs off, set them to false, in your lower environments.
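The flag being described looks like the following. This is a sketch, not a working invocation: the group/version list here is only illustrative of the 1.14-era deprecations, and the full line shared by Jordan Liggitt should be preferred for an exhaustive set.

```shell
# Turn deprecated API groups off ahead of their removal, so that anything in a
# lower environment still depending on them fails loudly and can be migrated.
kube-apiserver \
  --runtime-config=apps/v1beta1=false,apps/v1beta2=false,extensions/v1beta1/deployments=false
```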
C: So it's a good way of flushing out all the APIs that are going to be deprecated: you go through and make sure your manifests are all updated to use the new APIs. So that's pretty cool. I'll paste the line you can add to your API server into the Zoom chat. I think the original version of it was from Jordan Liggitt in the community channel; I'll throw it in there.
B: Big ups to Jordan, by the way. I'm going to be talking about the release process in a little bit, but Jordan is part of the release all up and down: he's filing bugs, he's doing reviews, he is a superhuman, and kudos to him. I can't imagine the Kubernetes community, or Kubernetes itself, working particularly well without somebody like Jordan involved. So kudos to him; he's awesome. Something interesting in the release notes: some things aren't just getting deprecated, they've been full-on removed. The --show-all flag on kubectl get (this is not the same as 'kubectl get all' or anything like that) has been totally removed, and the experimental fail-swap-on flag has been completely removed as well.
B
That
bomb
export
is
now
gone
so
export,
if
you
don't
know,
is
kind
of
like
doing
a
get
get
oh
yeah
no,
but
it
removed
some
of
the
unnecessary
fields
like
uib
or
creation
timestamp,
something
that
you
might
want
very
like.
Oh
I
want
to
template
eyes
this
in
an
easy
fashion.
That's
now
gone
directly,
but
I
think
not
a
lot
of
people
knew
about
it
or
to
use
it.
So
that's
what
it's
got
right.
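Since --export stripped server-populated fields, one rough stand-in after its removal is filtering those fields out yourself. This is only a sketch with a toy object; real workflows would more likely reach for a purpose-built tool, and a grep like this would not handle nested fields in general.

```shell
# A ConfigMap as the API server might return it, with server-populated fields.
cat > live-object.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  uid: 0d9607ff-aaaa-bbbb-cccc-000000000000
  resourceVersion: "12345"
  creationTimestamp: "2019-03-25T00:00:00Z"
data:
  greeting: hello
EOF

# Crude equivalent of what `kubectl get -o yaml --export` used to do:
# drop the fields the server fills in, keeping a re-applyable template.
grep -vE '^ *(uid|resourceVersion|creationTimestamp):' live-object.yaml > template.yaml
```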
D: I'm psyched about it. I mean, the biggest thing, obviously, with our role being field engineers: we're oftentimes helping people build orchestration around kubeadm and ensuring that they can bootstrap Kubernetes in a really clean way. And the reason I think we're so psyched about it is just that it's always been a bit of a headache to worry about PKI and to have to move certificates and keys around different hosts when bootstrapping. Beyond that, I don't really have a ton to say about it.
D: But it's cool because, aside from those of us who are orchestrating kubeadm, I think it has a lot of benefit for projects that are in flight, so Cluster API and maybe even kubespray. I haven't really looked at kubespray in a while, but I would imagine it could benefit from, or at least simplify, some of the orchestration.
B: Thank you for that. Something I just noticed, and I don't know why I never noticed it in release notes before, but something of interest to me is that we have a bunch of deprecated metrics. The Kubernetes community is no longer supplying the metrics specified in that list, and that's something important to pay attention to: monitoring and observability are important to the use of your cluster, and if you're using some of these archaic or older metrics, they're gone now.
B
If
I'm,
looking
for
really
since
I
want
to
see
the
cool
stuff
like
what's
new,
what's
stable,
what
should
I
watch
out
for
I?
Guess:
that's,
not
really
cool
but
whatever
I
never
really
cared
about
metrics
before
recently
and
I've
been
getting
into
like
the
observation
sort
of
thing
and
I
think
that
that's
pretty
handy
something
I'm
going
to
keep
watch
for
in
the
future.
None
of
them
really
stand
out
as
particularly
like.
Oh
crap,
this
very
useful
one
is
being
gone,
they're
like
being
removed.
B: ...it hadn't been updated in, like, 20 days, or in some cases four months, and someone had to figure out what was going on and chase that through the entire cycle. If you're interested in contributing to the Kubernetes community and you're not exactly sure how, the release team is awesome. I've been using Kubernetes for a couple of years now, and this is the first release I've contributed to. It was a great way to get involved in the community, to get my name up there, but also just to see other people's names.
B: You get to know who else is out there, who's interested in this and is working on it, dedicating, like, three months of their lives, essentially working a second job. For me it wasn't that bad, but there are ebbs and flows: some weeks I wasn't doing a whole lot, and there was a week where I was working like 16 extra hours. But part of that was to give all of us a stable and useful release of Kubernetes, so I implore everyone to join. We are actually looking for shadows for the 1.15 release now, so if you're interested, please join us.
B: It asks things like: what are your interests in Kubernetes, and so on; it just runs through that. It's a form that Stephen Augustus, one of the PMs of SIG Release, put together for 1.14, and it's being refined as they go, so 1.15 is the second time they've done it, and it seems like it's going pretty well. I'm actually the bug triage lead for 1.15, partially because I was very interested in continuing my involvement, but also because nobody else really wanted to do it.
B: He had, I think, about six shadows, Veronica, yeah, so he had a decent workforce to work with, but also a bunch of people to kind of, like, wrangle, right? He did a very good job of it. He worked as a shadow on bug triage and then as the lead for two releases, and then he said, 'I want to move on to another part of the release team; does anybody else want to step up and do it?'
B: Basically, for the first part of it, not everything has been milestoned yet, so we're waiting to get all the enhancements in place and all the issues associated with them. Then, when code freeze comes, that's when bug triage steps up its game: we have a list of tickets that are like, 'this has been open for X amount of time,' and so on, and we just go through all of those tickets.
B: It's, you know, making sure that the right people are looking at issues, the right people are looking at PRs, so that everything that's been identified for 1.14 ahead of code thaw is in place. Because if your PR is still outstanding when code thaw happens, it could get lost in the shuffle of all the other things that have been identified to go into the release. So we're trying to get all of that in, and, like I said, it's a lot of communication, a lot of bugging people, kind of being annoying.
B: It's hard for me to describe, but basically code freeze means we're not accepting any code or any PRs at this time: once code freeze happens, we need to make sure that everything is stable. After code freeze, code thaw can happen: once we've got a release that's fairly stable, the rest of the PRs can come in and get tested against that baseline. That's my understanding of it, anyway; I may well be wrong, and that would be a big problem.
B: Yeah, so there are a lot of different teams that go into the release. There's communications; our buddy George from VMware is the lead of communications for 1.15. There's bug triage, which I'm doing. There's the release lead and the leads of each role, who are like the PMs of the release. There's enhancements, CI signal, release notes...
B
At
QA,
no
is
no
test.
Tes
infra,
I
posted
so
testing
for
us,
so
those
are
like
the
main
ones.
I
might
be
forgetting
some,
but
if
you're
interested
in
any
little
Saints.
That's
in
for
communications
interesting
when
I
go
back
to
monitoring
and
observation.
They're
gonna
limp
you're,
like
testing
out
how
the
builds
are
running
and
they
have
a
really
complicated
and
interesting
grid
for
tests.
B: Something of interest to me that happened during this release cycle is that Go 1.12 is now the updated version of Go, the de facto standard for building 1.14, which is actually a big change. During the release process there was a lot of communication and discussion around whether or not we were going to stick with Go 1.11 or move to 1.12.
B: Some issues would bubble up through the 1.14 process after we had moved everything to Go 1.12, and there was discussion about what it would take to go back, because if the issues we were facing with 1.12 were bad enough, they might have forced us back to 1.11. That would mean facing the problems we had with 1.11 again, but, you know, it's the devil you know versus the devil you don't. So there was a lot of discussion around that.
A: So, for bugs, it's usually at the code thaw time that you have to go in front of the release leads and get permission to have your ticket cherry-picked. For everything else, including enhancements, there are different freezes, different times when this happens, but usually it's code freeze when they all kick in.
A: So, in order to run DOS, you'd still need this old... oh, they had some virtual DOS manager thing back on NT. I don't think that actually works in a container, so that's not going to work, Nicholas. That brings me back to 2004 or so, when virtual machines were becoming a thing and someone wanted to run Microsoft Bob in a VM.
A: And we've also been talking, I forget his name, I think it was John or Josh Baldry, on the SIG Windows channel: he's got a full environment set up where they're spinning up test databases for each one of their test passes under Kubernetes, and so his devs get that nice, clean SQL environment without having to call up a DBA to set one up each time.
A: And so I think we're going to be seeing a lot more of that coming up pretty soon, especially as there are more people doing deployments through the cloud providers, because, you know, Microsoft and Google have both published scripts for how to set this up and run it within VMs.
A: The thing is, I think we were confused on the timing, this being us doing it for the first time. I usually do TGIK for about an hour and a half, and then we said 3 o'clock for TGIK, but I think everybody's in a different time zone, so nobody knows what the hell's going on.
E: So this is the command: 'kubectl auth can-i'. With this command I can do things like 'can I get pods,' and I can understand, from an RBAC perspective, what capabilities I have as a user. Prior to this change, if I wanted to actually understand all of the capabilities I had, I would have to kind of go through and iterate through all the permissions, through all the things, to be able to understand what permissions I had. And so this is...
E: This is an example of 'kubectl auth can-i', and here I've assumed a service account by using the --as flag. I created a service account in the default namespace called admin, and I bound it to the cluster role cluster-admin, so it has these two lines up here at the top for permissions, which basically mean I can do anything to this cluster. This is, like, full-on every permission.
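The setup just described can be reproduced with a few commands. They need a live 1.14+ cluster, so they are only assembled here rather than run; the "admin" service account name and "admin-demo" binding name are taken from or made up for this demo, not defaults.

```shell
# Sketch of the kubectl auth can-i --list demo, assembled as strings.
SETUP="kubectl create serviceaccount admin
kubectl create clusterrolebinding admin-demo --clusterrole=cluster-admin --serviceaccount=default:admin"
# Everything the bound service account may do, in one call:
LIST_AS_ADMIN="kubectl auth can-i --list --as=system:serviceaccount:default:admin"
# Compare with the default, unprivileged service account:
LIST_AS_DEFAULT="kubectl auth can-i --list --as=system:serviceaccount:default:default"
printf '%s\n' "$SETUP" "$LIST_AS_ADMIN" "$LIST_AS_DEFAULT"
```

The --list form is the 1.14 addition: one call enumerating every allowed verb/resource pair, instead of probing permissions one 'can-i' at a time.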
E
This
is
god
mode
for
the
permissions
right,
but
if
I
look
at
like,
for
example,
but
the
default
tooken
or
service
account
that
it's
just
generated
whatever
the
default
namespace
is
created,
I
can
see
a
very
different
output
right.
I
still
have
some
of
this
discovery
stuff,
because
I'm,
an
authenticated
user
but
I,
don't
have
that
kind
of
god
mode
permission
that
you
see
in
these
first
two
lines
but
prior
to
this
change,
when
you
were
trying
to
understand
okay.
Well,
how
much
permission
does
the
admin
user
have
like
what
specific
permissions
do
you
have?
E
You
would
have
to
go
through
and
evaluate
each
of
them
role,
bindings
that
might
associate
with
that
user
and
kind
of
like
poke
holes
in
it
until
you
finally
figured
out
exactly
what
the
right
permissions
were.
This
is
the
way
I'm.
Just
saying
just
show
me
show
me
all
the
permissions
that
this
user
has,
which
I
think
is
really
cool,
and
it
represents
also
an
API
change
in
client.
Go
prior
to
this
we
had
a
thing
called
self
subject:
access
review:
oh
man!
What
happened?
E
Okay,
go
doc
here
we
go
okay,
so
part
of
this
change
in
in
authorization.
We
had
this
thing
called
self
subject:
access
review,
which
is
what
a
lot
of
things
are.
Subject:
access
review
what
a
lot
of
things
used
to
kind
of
determine
like
what
the
permission
model
within
our
back
so
like
if
you're
a
user
and
cube
Kindle
is
trying
to
you
know,
do
it
get
pods
or
create
a
deployment
or
something
like
that?
E
One
of
the
early
calls
in
that
process
is
to
determine
whether
you
have
the
purp,
the
Associated
authorization
to
do
that,
and
this
is
the
call
that
is
actually
used
to
determine
that
this
new
capability
when
you
do
list,
this
is
actually
a
function
of
this
commit
this
for
this
new
code.
That's
just
actually
landed
in
here
which
allows
you
to
do
a
rules
review
which
enumerates
a
set
of
actions
that
the
user
can
perform
within
a
namespace.
So
it'll
just
come
back
and
say:
here's
all!
E: But what this is doing is basically hardening the default RBAC discovery cluster role bindings, and I'll demonstrate this. Right now, if you're within the scope of the cluster, actually, not even within the scope: if you can reach the API server at all, as an unauthenticated user, you can get to a bunch of discovery information and kind of learn information about the cluster.
E
For
example,
if
I
do
cube,
Ketel
API
resources,
I
can
see
all
of
the
objects
that
this
Kuster
actually
exposes,
and
so
in
this
case,
if
I
were
to
set
up
like
a
new
custom
resource
definition
or
anything
else
like
that,
I'd
be
able
to
see
that
in
this
list
and
those
things
are
exposed
to
on
authenticated
users,
even
if
I'm,
not
even
if
I'm,
not
using
that
you
credential
I,
can
do
that
and
so
to
show
that.
But
quick,
let's
just
jump
in
here.
E
So
here's
my
so
what
I'm,
so
I
just
started
up
like
a
little
bash
container
inside
of
here
and
I'm,
going
to
use
curl
to
interact
with
the
internal
representation
of
the
API
sugar,
and
you
know
114
cluster
when
I
try
that
I
get
this
the
following
result
says
system.
Anonymous
cannot
get
the
api's
path.
But
what
happens
if
I
try
this
on
a
113
or
before
cluster.
E
The
idea
was
that
you
would
want
to
provide
this
and
I'm
not
saying
it
in
a
way,
so
that
applications
could
determine
what
to
ask
for
like
do
some
discovery
within
the
kubernetes
cluster,
because
community
is
at
its
heart
is
really
a
collaboration
platform
like
we
want
to
be
able
to
bring
up
many
services
and
allow
them
to
discover
and
make
use
of
each
other,
and
so
there
were
a
lot
of
decisions
made
early
in
the
process
to
kind
of
an
able
that.
But
what
we've
realized
lately
is
that
you
know
now.
E
You
actually
have
a
surface
account
kind
of
on
by
default.
You
there's
no
reason
for
you
to
ever
be
on
authenticated
right,
and
so,
when
we're
doing
discovery,
you
could
just
use
this
account
to
actually
authenticate
to
the
API
server
and
get
this
discovery
mechanism
rather
than
assuming
that
I
have
to
expose
that
to
the
world.
I
can
just
expose
it
to
those
users
or
to
or
to
those
applications
that
are
authenticated.
This
is
a
pretty
good
improvement
generally
in
the
security
or
the
default
security
of
your
company's
clusters.
E: There is a reason to want a 'who am I', and there is some work being done there; there's this thing called 'who-can'. But from the whoami perspective, there still isn't a hook that tells you that, and I think they kind of want to keep it that way. I mean, if you think about it from Kubernetes' perspective, you're either system:authenticated or system:unauthenticated; it doesn't have an idea of the user per se. Once you get into it, that identity just gets mapped to the permission set.
E: Well, minikube is a little different, only because minikube sometimes actually exposes an unauthenticated endpoint, which means that when you're trying to do these commands, it's effectively bypassing RBAC. So you just have to be sure that the endpoint you're using on minikube is actually the RBAC-enabled one, the secure port.
B: Yeah, I recommend kind over... nah, I don't want to disparage minikube at all; I think it's awesome for what it needs to do, and it actually exposes some things that kind doesn't. But since I was introduced to it, I've been using kind non-stop. I love kind; I highly recommend it. Kind is cool.
C: Cool, so, some of the things I thought I would walk through are just the new kubectl updates. I think probably the most controversial change of the 1.14 release was that we now know kubectl is pronounced 'kube cuddle,' because of the new logo. Wait, what? I'm going to show you. So kubectl has a whole bunch of updates; I know I'm personally...
C: So yeah, this docs page is really cool. On the left-hand side you've got the menu and all the different kinds of things you can do with kubectl. But one of the cool things they've now merged into it is functionality for a tool called Kustomize. I hadn't played with Kustomize before I decided to do this demo; Kustomize is pretty cool. When we're working with Kubernetes and manifests, you know, we can get a lot of stuff from upstream.
C
We
might
be
using
home
where
you
know
the
Charles
come
for
us
and
we
can
override
certain
values,
but
sometimes
we
need
to
still
modify
that,
though
we
want
to
apply
our
an
overrides.
You
know.
I
personally,
have
spent
a
whole
bunch
time
messing
around
with
said
and
copying
and
pasting
and
all
that
kind
of
stuff.
So
doesn't.
My
is
a
really
cool
tool
where
you
can
define
your
overrides
and
find
kind
of
like
a
hierarchy
of
llamó,
and
then
you
can
override
some
of
those
pieces.
C
You
can
apply
this
to
a
cluster,
so
just
gonna,
and
now
that
function,
I
used
to
be
a
standalone,
zeolite
tool,
and
now
some
of
the
functionality
is
actually
baked
into
system
to
walk
through
and
you
can
customize
still
serve
a
repo.
So
if
you
don't
get
hub,
you
go
to
hell.
Calm.
These
six,
slash,
customize
or
the
code
still
lives
in
there
and
some
good
examples
on
there.
But
I
just
wanted
to
walk
through
using
coop
CTL
now
and
on
14
and
some
of
the
things
you
can
do
with
customize.
C
A
cyclic
clustered
sound
so
at
least
four
examples.
So
let's
take
a
look
at
a
simple
one,
so
I
can
simplify
my
deployment,
so
you'll
add
appointment
needed
just
a
static
community's
deployment
and
I
could
define
some
modifications.
I
want
to
make
to
this,
and
I
can
use
a
creamsicle
use.
Customized
merge
these
together,
but
now
I
can
use
qtr
to
do
that.
So,
if
I
look
at
my
customization
Yama,
basically
what
cost
customize
allows
you
to
do?
Is
it's
got
some
top-level
kind
of
instructions.
C
So,
if
I
so
again,
with
this
resources,
key
I
tell
it
which
resources
I
want
to
override
so
I'm
gonna
tell
it
my
deployments
or
yarmulkes
in
this
directory.
The
namespace
directory
allows
me
to
override
name
space
for
all
of
the
resources
that
I'm
talking
about
I
can
give
a
prefix,
so
there's
a
prefix
and
a
suffix.
So
in
this
case
let's
say:
I
want
to
prefix
all
of
my
resources.
This
could
be
deployments.
Also.
C
Do
the
self
resources
like
card
stock,
except
I,
can
say
my
config
Maps,
whatever
services
and
I
want
to
prefix
them
with
a
simple
demo
and
I
can
also
add
common
labels
to
a
set
of
resources.
So
I
right
now
have
two
labels
on
here
and
my
deployments
called
nginx
its
customization
by
Gamal.
So
now
what
I
can
do
is
cube
CTL,
and
this
time
is
a
new
flag
called
K,
and
now
I
can
just
point
to
that
directory,
which
is
simple
and
I'm
gonna.
C
C: ...for the first time, use Kustomize straight from kubectl. We can see what it's done: in our labels it's kept the original two that were there and added our common labels, app: example and env: test, and it's applied the name prefix. If I take a look at the name, it's got the existing one, which was nginx, prefixed with 'simple-demo', and it's changed the namespace to my-app-ns. That's, like, the super simple functionality of Kustomize.
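The 'simple' example above can be reconstructed as files like this. The exact contents are a guess at the demo's layout, but the keys used (resources, namespace, namePrefix, commonLabels) are real kustomization.yaml fields.

```shell
# Rebuild the layout of the walkthrough's "simple" directory.
mkdir -p simple
cat > simple/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
EOF

cat > simple/kustomization.yaml <<'EOF'
resources:
  - deployment.yaml       # which manifests to operate on
namespace: my-app-ns      # rewrite metadata.namespace everywhere
namePrefix: simple-demo-  # nginx becomes simple-demo-nginx
commonLabels:             # added to every resource (and its selectors)
  app: example
  env: test
EOF
# Against a live 1.14 cluster:
# kubectl apply -k simple/     # apply the kustomized output
# kubectl kustomize simple/    # or just render it to stdout
```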
C
And
then
you
can
just
give
it
the
directory.
This
works
there
we
go.
So
that
does
the
same
thing.
So
if
you
don't
want
have
to
do
an
apply,
dash
K
and
give
it
the
dry
run,
oh
yeah,
no,
you
can
do
cube
CTL
customize
and
they
just
give
it
a
build
directory.
So,
let's
think
of
a
slightly
more
complex
example:
let's
go
take
a
look
at
this
of
merge
directory,
so
this
one
where
one
might
want
to
do
a
few
more
changes.
So
the
customization,
though
Yama
file,
gives
us
some
top-level
commands.
C
C: ...where I've changed the namespace, changed the prefix and labels and stuff like that, but we might want to actually change some arbitrary fields in the YAML. So let's take a look at a deployment again, kind of the same one; this time I don't have a replicas field in here, and I've got some limits set for my nginx.
C: If we take a look at my kustomization.yaml, I'm telling it to operate on this deployment YAML, the one I just showed, and this time I use the 'patchesStrategicMerge' directive to say: go look at this patch YAML file, calculate a merge between that patch and my deployment, and then go ahead and apply that.
C
So,
if
you
take
a
look
at
my
patch
tamil
I'm,
defining
a
similar
structure
but
I'm
just
defining
the
keys
that
I
need
at
the
top
level
and
what
I'm
gonna
do
is
I,
we
can
see
I'm
gonna
override
the
number
of
replicas.
So
it's
gonna
add
this
key.
So
I
had
breakfast
three
I'm
gonna
override
the
limits
for
the
limits
in
the
request
of
CPU
being
seen
the
original
one.
It
was
no
point
to
know
by
one
we
go
to
patch.
C
We
can
see
I'm
gonna
override,
that's
one
and
this
time,
if
I
do
my
group
CTL
apply
use
customize
gonna,
give
it
my
timers
merge
directory.
Now,
if
we
look
at
what
spit
out,
we
can
see
it's
my
original
top-level
deployment,
but
it's
added
in
replicas
three
and
it's
modified
my
request
to
patch.
This
is
just
a
simple
way
that
I
can
override
some
more
arbitrary
values
rather
than
using
those
top-level
directives
in
their
customized.
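The merge example above, rebuilt as files. The replicas value (3) follows the narration; the CPU numbers and the rest of the layout are approximations, and patchesStrategicMerge matches patches to targets by kind and name.

```shell
# Rebuild the walkthrough's "merge" directory: a deployment plus a
# strategic-merge patch layered over it.
mkdir -p merge
cat > merge/kustomization.yaml <<'EOF'
resources:
  - deployment.yaml
patchesStrategicMerge:
  - patch.yaml            # merged over deployment.yaml field by field
EOF

cat > merge/patch.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx             # must match the target deployment's name
spec:
  replicas: 3             # added: the base deployment has no replicas field
  template:
    spec:
      containers:
        - name: nginx
          resources:
            limits:
              cpu: "1"    # overrides the base's 0.2
            requests:
              cpu: "1"
EOF
# kubectl apply -k merge/   # against a live cluster
```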
C
So
now,
let's
look
at
a
more
complex
example:
I
look
at
overlays,
so
overlays
in
customize.
Allow
you
to
define
a
hierarchy,
so
I
can
have
like
a
base
set
of
the
animal
that
I
want
to
use
as
a
baseline
or
a
generic
course.
If
I
was
doing
different
environments
or
different
like
a
canary
release
or
different
zones
and
WS
or
whatever
I
need
to
override
certain
certain
pieces
in
the
animal.
So
I
can
define
my
base,
which
again
is
just
an
appointment.
C
So
you
know
is
this:
doesn't
have
any
replicas
and
it's
back
to
the
original
CP
elements.
I
also
have
a
customization
in
here
and
all
I'm
saying
is
operate
on
deployment
or
Jana
right
now
the
magic
is
down
at
my
overlays
folder.
So
if
I
go
down
to
overlays
and
see,
there's
a
dev
and
a
product
I've
used
some
different
patch
strategies
here,
just
to
show
how
what
you
can
do
if
you
don't
customization
for
my
dev,
but
this
time
I
define
my
base.
C
So
what
I'm
going
to
do
is
I'm
going
to
apply
this
dev
overlay.
So
he's
telling
go.
Look
at
this
face
and
look
at
the
customisation
gamal
in
there
and
operate
on
deployment
ya
know,
but
then
I
want
to
do
a
before
you
remembered
as
a
Patras
strategic
merge.
One
of
the
merge
techniques
I
can
use
when
I
want
to
marshal
this
animal
together.
C
This
case,
I'm
gonna
use
a
different
technique
which
is
patches,
JSON,
so
use
the
JSON
patch
RFC,
six,
nine
zero,
two
I'm
going
to
tell
it
the
target
I'm
just
going
to
tell
it
the
API
groups
or
absentee
one.
The
kind
is
deployment
and
the
name
is
nginx.
Deployment.
Tastes
lie
deployment
up
top
and
in
the
patch
file
I'm
going
to
give
it
a
syntax.
So
I
can
define
any
number
of
additions
to
this.
We
got
different
operations,
so
we
need
to
replace.
C: ...so what I can do is just define the path for it: /spec/template/spec/containers/0/resources/limits/cpu, that is, spec, template, spec, containers, item zero of the list, resources, limits, CPU. I'm targeting this resource limit CPU, this value here, using that JSON Patch syntax, and I want to override it to a value of 4.0. I don't know whether that value makes sense at all; so I'm in the dev...
C
So
this
time
I
want
to
go
ahead
and
do
a
no
overlays
example
overlays
directory
and
then
death
I
go
apply
that
and
we
can
see
it's
it's
gone
ahead
and
change
that
resource
limit
to
for
its
I'm
using
a
JSON.
But
it's
still
using
that
common
base
right,
so
I
can
define
all
my
common
stuff
in
the
base.
Deployment
and
I
can
just
override
with
these
patches.
C
So
if
we
take
a
look
at
my
prod
customization,
which
is
another
overlay
for
going
to
my
customization
I'm
using
the
the
JSON
patch
again,
I'm
just
I'm
talking
to
the
base-
and
you
can
actually
override
multiple
basis,
if
so
so
bases
list.
So
if
I
have
like
three
or
four
different
bases,
I
want
mo
together,
I
can
refer
to
multiple
bases,
and
my
patch
for
prod
is
going
to
be
adding
a
replicas
field.
C
OK, so I'm using a different operation for the JSON patch. I'm saying: go ahead and add /spec/replicas, from the head of the YAML, and I want the value to be 3. So if I go ahead and deploy this one, prod, it's basically going to throw this out, but it's going to add my replicas equal to three.
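That prod patch boils down to one add operation, roughly:

```yaml
# prod patch (sketch): add a replicas field at the top of the spec
- op: add
  path: /spec/replicas
  value: 3
```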
C
So this was a really small introduction to how kustomize works. Kustomize existed before, but in 1.14 all this functionality is built into kubectl. Before, people would run `kustomize build`, write the output to stdout, and then pipe that into `kubectl apply`, so it would be applied. But now we can just do `kubectl apply -k`, instead of `-f`, give it a kustomize path, and it reads everything. One other cool thing as well: we can do `kubectl diff`, I think.
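In other words, the old two-step pipeline collapses into one command. The paths here are illustrative and these commands assume a live cluster:

```shell
# Before kubectl 1.14: build the overlay, then pipe it in
kustomize build overlays/dev | kubectl apply -f -

# With kubectl 1.14+: kustomize is built in, behind -k
kubectl apply -k overlays/dev
```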
C
There
we
go
so
I,
give
you
coop,
CTO
diff
k
and
give
it
a
path,
and
this
will
actually
give
me
a
visual
diff
from
my
local
output
of
few
CTL
and
customized
patches
apply
and
the
cluster,
so
look
at.
What's
actually
gonna
get
created,
so
we
can
see
all
these
pluses
like
this
doesn't
exist
on
the
cluster
right
now,
so
it
would
go
ahead
and
apply
all
of
these
pieces
now.
This
does
require
an
API
server
flag.
It
requires
the
like
servant.
C
...side dry run to be switched on. So if server-side dry run is switched on, which I think is an alpha feature... OK, so I'm using kind for this, on 1.14, so that flag is on. So you need the server-side dry-run flag applied for this, but this is pretty cool when you just want to know: OK, what's actually getting applied, how sophisticated is it, what do you want to rely on?
A
Right, I would love to see, for those overlays, a mode where you can essentially edit one of them: do some sort of `kustomize edit`, edit the file, make whatever changes you need, and then it derives the actual patch out of that, right? It knows the base version, it knows your intended version, so it can then derive the patch, and now you're not managing the patch files directly. Yeah, probably.
C
One of the cool things that kustomize does, which I didn't cover, is that it has generators built in for Secrets and ConfigMaps. So what it'll do is: you basically define your config map content with the same kind of structure, and you define a configMapGenerator, say my config map's called config-1, and in my deployment I can say "use config-1". But each time I generate the config, it'll compute a content hash and append that hash to the name of it in every deployment. So then you can do rolling updates.
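A configMapGenerator stanza illustrating the idea (the map name follows the demo; the literal value is invented). Kustomize appends a content hash to the generated name and rewrites every reference to it, which is what triggers the rolling update:

```yaml
# kustomization.yaml (illustrative generator)
configMapGenerator:
- name: config-1
  literals:
  - LOG_LEVEL=debug
```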
E
One thing I'd point out is that the two patch versions you showed, the JSON patch and the patch with the addressing stuff, are actually two capabilities that kubectl kind of already has. Like, if you wanted to manipulate resources within kubernetes, the raw JSON patch expresses that same mechanism. And so if you're familiar with the JSON patch inside of kubectl, this isn't a big jump, right? It feels pretty familiar. Yeah.
B
They weren't scoped at all in the code, and so I added a medium-sized little PR, which got merged for 1.14, that completes the story: now init containers will have the same variables set in their environment variables. However you set a pod preset, the same pod presets that affected the regular container will also affect the init container. So I'll demo that really quickly, once I figure out which direction my monitors are on my screen.
B
So here we go. First off, something kind of interesting that I had to do to get this to work for the demo: I'm using kind, and kind does not accept feature gates or admission controllers in the kind config when you're setting up a cluster. There are ways to patch a kubeadm config during the setup of a kind cluster, but I couldn't figure that out, and so I did something a little bit more dangerous and a little bit interesting. So I'm going to add this other terminal here really quick and do docker...
B
...exec. OK, this is my sudo password, please don't steal it now that you can see it. Actually, I'm kidding. I had to do a bunch of sed commands, and so I'm logged into the master controller, which is actually the only node in this kind cluster. With kind, when you try to, like, SSH into a kind node (which is running as a docker container, so not even SSH, docker exec), there isn't a vim or an emacs or a nano. So the war between your favorite text editors doesn't...
B
...apply in this control plane. I found the only way to edit something was using the sed command. So here I'm doing a sed, using octothorpes (hashes) as my regex delimiters, and I'm just adding in, underneath the TLS private key section, a runtime-config entry to turn the settings API on; that one is set to true, and the other one is...
B
And the other one is adding PodPreset to the admission plugins here. And so once you make those changes, you don't have to do anything twice: it'll all automatically re-roll the API server container and the changes will be in place. So these are some of the little hacky things you have to do to get kind to work exactly the way you want it to right now. It's not kind that re-rolls it, it's the...
B
...yeah, sorry, I did call it the static manifest, yeah. So since kind doesn't have these features in its config right now, these are some of the things you have to do to get some of the fringe features of Kubernetes to work. So that's something kind of interesting for you, if you're inclined to try something new in kind; it might be a rocky road today. I'm going to kill that one. So really quickly, actually, just to pull up the kind config really quick.
B
Let's see. You'll see here at the bottom: this is where I tried to add feature gates to kubeadm. This doesn't work, so don't do that; I'm just giving you a demo of what not to do. Don't do this. The rest of this actually works, though. Basically, the way this works is: if these fields in a kubeadm config are templatized, they will be picked up in this command. Feature gates are not accepted; they're not templatized just yet, and there's an issue open in kind to accept that.
B
Well, we're not there yet. So, kind of an interesting thing: you can configure your entire kind cluster here. So `kind: Cluster` (kind's kind is Cluster, which is a little bit confusing), the apiVersion is from kind.sigs.k8s.io, and then you can set the nodes. Say you want one control plane and three workers; that's how you can set...
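The layout being described maps onto a kind config like this (the apiVersion is spelled as in the kind docs of that era; treat the details as assumptions):

```yaml
# kind-config.yaml: one control-plane node, three workers
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
```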
B
...control-plane nodes. If you want to have them try to do an HA thing, you'll need to do a little bit of extra work. I think some of us in the community, maybe even on the call, are trying to figure out a kind HA kubeadm sort of thing. So that's a breakdown of the kind config file, and then, looking at my kubectl, let...
B
Oh yeah, I like to put everything in the YAML because I'm a lunatic, I guess. Oh, interesting, I didn't know that some of those pieces worked. Okay, that was cool; that kind of shows off how things work. So I was expecting to see the template, but if you look at the deployment, it makes sense that this is the way this works; I just completely spaced on it. So this is my pod preset information, the environment variable named after Joe.
B
Did this work? And the value is there, you know. It's in the container, and gosh darn it if it isn't in the init container too. The way the pod preset works is that it is an admission controller. Before the idea of a mutating webhook existed, all of these existed as admission controllers, and they were all kind of jumbled up together; in this case, the pod preset acts as a mutating web...
B
...hook. So if the Kubernetes API server sees that your pod is labeled with the label that the pod preset is expecting, it will then modify the JSON that's getting passed to the API server and essentially change your deployment whole-hog. I actually forgot that it did it that way. In this case, I was kind of hoping to get the deployment spec back without any other modification.
B
Fabrizio actually says kind already supports HA. I'm sorry, you're right; as soon as I was saying that, I thought, that doesn't sound right. I remember somebody, Duffie, I think you were working on the idea, using the new kubeadm certs, the certs flags that Josh was talking about in his blog, around the HA function there. That's...
B
What I think is pretty cool, and something that might be interesting to revisit, Joe, is volume mounts: you can set volume mounts for your init container now and your regular container in the exact same way, and pod presets can also change volumes. So, in the case of... back in the day, like six months ago, you know, a long, long time ago, we did a TGIK on Dex, and one of the things that we did with that was change...
B
...the config for that, because before, doing something like that, if you want to pass a volume into your init containers, it's kind of... it's not that big of a hassle, but it's not the best thing in the world either. Pod presets fit this function as well, where you can have the same volume apply to init containers. So that is pod presets, and that's it for the demo that I had. I also wanted to do local storage, and I can kind of walk through the config.
B
And then if we go to the storage class really quick: it's just a really simple storage class, provisioner `kubernetes.io/no-provisioner`, volumeBindingMode `WaitForFirstConsumer`. Basically, it won't bind the volume until it's consumed, which I always enjoy. And then the PVC looks like this: it's just storageClassName local-storage... this should probably say "volume" here, now that I think about it, and then that's about it. It's pretty simple, pretty straightforward, and otherwise it works exactly the same.
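Pieced together from the walkthrough, the two manifests would look roughly like this (the class name and binding mode are the ones mentioned; the size and access mode are assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```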
B
I just didn't see any great success with it right now, but I also was trying to do it kind of last-minute, so oh well. So, any questions on the pod preset stuff or local storage, or anything like that, or discussion there? And... how do I stop sharing... cool. So, any thoughts on that or any of the other demos? Okay, thanks.
E
Alright, so first: everybody's been talking about kind, so I wanted to do a quick overview of what that is and what it's capable of, which is actually pretty awesome. So kind is Kubernetes in Docker. To be fair, before kind came along, we were kind of doing this with kubeadm-dind, which is a very interesting way of solving the same problem. It provides... it's kind of a hack.
E
It is quite a bit more of a hack job of getting to where you need to be, but it was actually really great for me from, like, 1.8 all the way up to recently, because in my day-to-day life I find that I quickly need kubernetes clusters that I just need to show stuff off on. So you've probably seen me use kind on TGIK, and if you're not aware of TGIK, you should check it out.
E
You can totally go see it. Actually, making sure that we're talking about this in both directions, let me bring that up: heptio/tgik. Because there you can see a bunch of talks (one that was actually just kind of jokes) that cover a bunch of different things that we're playing with or that we've discovered in the industry. We do it every Friday at one o'clock; go check it out, a very, very cool thing. What I want to show you right now, though, is kind.
E
So if you go to kind.sigs.k8s.io, which is a vanity URL for kubernetes things mapping onto the github repository, slash kind, it'll show you the repository where this is being developed. Kind is actually fitting into the ecosystem in a variety of really interesting places, including the ability to use kind to bring up a kubernetes cluster and do testing on it. There's now the capability within the kubernetes repo to actually request that kind do an e2e test against your particular merge.
E
So if you have a PR up, you can actually request that kind spin up a temporary cluster, leveraging it, and do an e2e run against your particular merge and make sure all the stuff works. So that's one of the target use cases. I use it as kind of a daily driver, and so I wanted to show what that means.
E
So here is an example of a relatively complex configuration of kind, which I think is actually pretty interesting. What this does is: kind actually lets me define three different roles. One role might be my control-plane node, and that will be one of the masters; and if I actually create three of them, it will...
E
It's already wired up to do the work required to bring up a cluster in HA: it'll bring up all three control-plane nodes, it'll copy over the certificates necessary to do all that work, and it will join the etcd members into one etcd cluster. It'll handle all of that for you. One of the other benefits, again very similar to what John was showing you earlier with kustomize, is that it has actually integrated kustomize, and it allows you to override the configuration of your kubeadm.conf, leveraging kustomize.
E
So in this example, I wanted to set my networking configuration to something that would be compatible with Calico, and so, without having to actually modify Calico's IP pool, I just wanted to set the pod subnet within my kubeadm.conf to 192.168.0.0/16. So I can actually apply this config patch, which is more similar to the overlay mechanism that John showed earlier, where I can just make sure that I'm anchoring on the object I want to modify: kubeadm.k8s.io/v1beta1...
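As a sketch, a kind config carrying that kubeadm patch could look like this (the apiVersion strings follow the 1.14-era formats; treat the exact shape as an assumption rather than the demo's actual file):

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta1
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    podSubnet: "192.168.0.0/16"
```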
E
The kind is this, the name is that, and this is actually what I want to overlay, and it will modify my configuration on my behalf. The next thing I wanted to try was the IPVS configuration as the kube-proxy implementation inside of a kind cluster, and so now I'm going to show you. I'm...
E
...doing a `kind create cluster` with my config and the name kind-1, and what happens here is it'll actually, again, do the work of standing up a kubernetes cluster on my behalf. Each of these boxes represents a single node. So, as we see from the list here, I've got five total nodes: three control planes, one external load balancer, one worker. I should have five boxes here that are each being prepared, and what's happening in the background is that it's effectively creating those five docker containers and building them.
E
But we can see here that we have, like, five docker containers that have started up. So if you go to the kind site and go down here to the documentation page: kind actually has a `kind create cluster` command, which will do the work of standing up the cluster for you, but it also supports the idea of building a node image and building a base image. Your base image is what makes up that underlying operating system. So, as Nick pointed out earlier, there's no vim already installed in the operating system.
E
If I wanted to modify that, what I could do is actually grab the Dockerfile that is used to create that base image, modify it to add vim to it, and then build my own base image that I would use for kind. And all of this is built into the design of kind, right? So if I wanted to modify what the base image looks like, I could just pull open the Dockerfile, edit it, and then, when I do a kind build of the base image...
E
...I can pass the Dockerfile that I want the base image to be based on, and it will create it for me. The same is actually true for the node image, which is the next layer: kind thinks of every node as two layers, the base image and the node image. The node image is actually just bundling in all those bits necessary to create the kubernetes cluster itself.
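The two build steps described here map onto kind subcommands; the flags vary by version, so check the help output rather than trusting this sketch:

```shell
# Build a custom base image (e.g. with vim baked in), then build a
# node image on top of it; see `kind build base-image --help` and
# `kind build node-image --help` for the flags your version accepts.
kind build base-image
kind build node-image
```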
E
So, for example, the node image for the load balancer might include the HAProxy configuration and the particular HAProxy packages that would be necessary for that particular image to start; or, for a master or for a worker, it would include kubelet, kubectl, kubeadm, and all of those actual images that are necessary. One of the interesting things about the kind node-image build is that it will...
E
Actually, if you already have the kubernetes code base checked out locally, you can have it build against your particular branch of the kubeadm code or the kubernetes code base, and so kubeadm, kubelet, all of those binaries that you're using will actually be pulled from your own build rather than being pulled from...
E
...some public build. This is a great way of actually doing local testing for kubeadm, and it's kind of in line with what I was talking about before, with being able to bring up a kubernetes cluster to debug these things. So if we go back to this `kind create cluster` output, what's actually happened here is: it's gone ahead and created the five images; it started the external load balancer, which is just an HAProxy instance running locally.
E
It started the control plane, it added the other control-plane nodes, and then it started my worker nodes. And then it gives me an output of this piece where I can actually set my kubeconfig to the resulting kubeconfig. If I do `kubectl get nodes`, I can see these nodes running at the version that was used to install them. If I wanted to customize this, again, all I would have to do is build the kindest/node image...
E
...that was at my particular version, and so that's actually pretty cool stuff. By default right now, kind uses weave as a network; you know, that's why you see all these images. When you get pods across all namespaces, I can see weave being deployed across all four of my nodes. You can see the proxies, the schedulers, all these static pods that kubeadm actually manages, and it's all kubeadm doing all of this configuration.
E
True. All right, so what I've got here is, again: I stood up a kind cluster, and what I've done now is a `kubeadm reset` on all of the nodes, because I want to show off this relatively new capability that just landed in 1.14 to manage one of the more finicky bits of standing up an HA cluster in an automated way, which is very cool. And so, before we get into how this part works...
E
Let's talk about what the problem is. So right now, today, with kubeadm: if you wanted to stand up an HA cluster, the way you would do that is run `kubeadm init` on your first master, and then you would have to go through this manual certificate-distribution mechanism, where you copy the CA cert and key from master one onto master two and master three, and you copy the front-proxy CA to masters two and three...
E
...and if master two doesn't have a copy of that token, it's not going to trust that service account, and you end up in all kinds of weird, interesting failure states. And so that's actually one of the challenges of setting up an HA cluster leveraging pretty much any kubernetes solution, regardless of whether it's kubeadm or any of the others. One of the neat things that landed in 1.14 is that we've automated that, and so what I'm going to walk you through is kind of what that looks like, which is pretty cool.
E
So, to that end, I've stood up a kubernetes cluster. If I go into the /kind directory at the root, I can see this is the stuff that actually ships with the kind node. This kind directory is actually what we're building when we build kindest/node, right: all the manifests that are here, the kubeadm.conf; this is all part of the cluster bring-up. If I look at kubeadm.conf, it is basically the kubeadm configuration for this particular cluster, and it's somewhat abbreviated.
E
So what I'm doing here is ignoring all pre-flight errors. Remember that what I'm doing here is standing up a kubernetes cluster inside of docker containers, right? And so a lot of the configuration that the docker container is exposed to is the configuration of my daily-driver Linux host. So, like, the version of docker is maybe not the specific version that it wants.
E
And so what's happening now is that kubeadm is actually going through doing the work that it needs: it's generating the certificates necessary to stand up the kubernetes cluster, it's generating kubeconfigs, and then it's generating the static manifests that will be hosted as the kubernetes manifests that will create the API server, the controller manager, and all of that other stuff. Once it gets through all of that work, it'll also give me some information about the cluster that it's standing up.
E
And then it really makes it pretty easy for you to go ahead and expand that cluster. So now that you have a single master: how do I make it so that I have a fully HA control plane, right? And what the output here says is: if I need to stand up an HA cluster, all I have to do is run this on two other nodes, and I will have an HA control plane.
E
It will take care of all of that for me, and so I want to show off that capability. So I'm just going to copy the output of this command and pop it over here. To break down what's happening here: I defined, as part of my kubeadm.conf, a token that could be used as an authentication mechanism for these other nodes, and I've also described what the CA cert hash is. And in this way, when kubeadm on this new node tries to authenticate with the old node...
E
It says: the CA cert had better match this, or I won't trust it. That's just the hash of the certificate being used to serve the API server on that first node. And then this `--experimental-control-plane` flag says: bring up another control-plane node, not just another worker node. And the certificate key, this is the new stuff: the certificate key is a unique key that is used to encrypt the secrets that were uploaded as part of the experimental certificate upload. So over here I do an `export KUBECONFIG=` pointing at the /etc... config.
E
`base64`-decode this: this is encrypted text, right? This is not something that a user could read. It's encrypted using AES-GCM, and that key, the certificate key... so this key right here is actually what's being used for the encryption, symmetric encryption. And so even if somebody were to get hold of this secret (because secrets are not particularly well kept within kubernetes)...
E
If somebody wanted to pull down the secret, they would still need this certificate key to decrypt this stuff, this high-value key material, and make use of it, which is pretty cool. So, since I'm giving this decryption key to my new control-plane node, it will be able to do a fetch of this secret material, decrypt it, and make use of it. So let's watch that.
E
We can see it'll actually go download the secret, kubeadm-certs, from the kube-system namespace, and then it's going to actually make use of those certificates, and it will generate any certificates that are node-specific. For example, the etcd peer certificate will be unique to this node; likewise the etcd health-check client and the API server's etcd client.
E
All of those certificates are unique to this particular new node, but the ones that are going to be reused are the existing certificates that we just downloaded. And so now, if I do a `kubectl get nodes` over here, I can see I have control-plane and control-plane-2... And then, sorry, the other thing I want to point out here is one of the other really interesting things that's happened: the automation that's built into kubeadm...
E
...is also doing this part, right? And so what's happening here is: when I stood up that new control-plane node, it used all the certs from the first control plane, including some that I don't know that I needed, but it also went ahead and made the calls necessary to join this new etcd node to the existing etcd node, so that I have an etcd cluster of two members.
E
And again, this token isn't meant to be used by just one node; I think all of these tokens and these secrets are actually usable for any number of things within a two-hour time period. So that was the other thing I wanted to talk about: as part of this security mechanism, what we've done is, if I again go back to that secret, I can see that on the secret that's being created, the owner reference...
E
...is this bootstrap token. Now, this is kind of a tricky thing to understand with kubernetes: kubernetes now has this built-in garbage-collection mechanism, where anything that is related by this idea of an owner reference, if the referenced thing gets deleted, then all of the things that belong to it will also be deleted. That's being handled by kubernetes garbage collection, and so the bootstrap token is this ID.
E
So yeah, but my point is that it's a relatively secure implementation. Even though you're putting these secrets up, you're encrypting them, and this token that I actually just used to join, this certificate key, right, is never actually stored in-cluster. You only ever see it in the output, and if you don't have it, there's no way to actually decrypt that value. And you only have two hours to get it, right?
E
So if you don't come across that certificate key within two hours, the data is gone anyway. Very, very cool stuff. One other thing, yeah: so now we have an etcd cluster of three members, but it knows... oh, I'm scrolling back; the heck with it, it'll get those. So now we have a three-master, HA-ish cluster, with weave.
E
Can I? I totally can. Like, at the moment, what I've done is generated this token with a 23-hour time period, and I've done that as part of my kubeadm.conf just to make it easier to stand up all the rest of my cluster. But if I wanted to actually mint one, I can mint one pretty easily with the kubeadm...
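Minting a fresh certificate key and re-uploading the certs is one command in 1.14 (the experimental flag spelling is the 1.14 one; later releases renamed it):

```shell
# Re-upload the control-plane certificates, encrypted with a freshly
# generated certificate key; kubeadm prints the key for use in joins.
kubeadm init phase upload-certs --experimental-upload-certs
```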
E
...which is where it would actually have checked out. Instead of waiting for the automation to build it, what I might do is a `git checkout` of a particular release branch, like 0.2.1, and then I could go into the hack/build directory and invoke this bash script, which will build it for me, and that will actually build effectively against a known good tag, right?
D
I would say the one thing that we have to look at in future, and people are kind of already working on it, is how we handle dependencies in general, how we do package building, and how we deliver artifacts. So anyone who got really excited about 1.14 and tried to pull down 1.14, or previously used 1.13, noticed that the kubernetes-cni packages were not the ones that you needed.
D
There was also recently a vulnerability announcement around 0.6.0, where, you know, optimally you'd be using 0.7.5, but we're kind of still working out how we're going to be handling packaging of debs and rpms. So there are one or two KEPs in flight regarding that. That's something that we want to nail down so that, you know, we don't introduce regressions across release cycles, because that has not been fun.
D
So what will often happen towards the end of the release cycle is someone will go, "hey, this new version of Go just came out, should we try to bump it?" Like, that happens pretty much every cycle. So this time around, I think, there's the interesting case of introducing something that can use Go modules.
D
So you see a lot of downstream dependencies... like, we're starting to work on it for the cloud provider for Azure, which is not the same as the Cluster API provider for Azure; that's the out-of-tree provider for the in-tree cloud provider, if you look through, like, pkg/cloudprovider, right? So those are the out-of-tree versions of the in-tree providers; they're starting to assess Go modules. There's also a PR up for assessing Go modules for kubernetes/kubernetes, which is pretty interesting.
D
So, you know, we're borrowing and stealing from that PR as needed. But I think that switching to a version that supports Go modules out of the box makes it really interesting, because of all these downstream dependencies: we haven't done it in kubernetes/kubernetes. So if you take a dependency on something that's not using it, they're still using Glide...
D
...so using, like, the hack update-vendor scripts, and then you try to do it cleanly in your own repo, you're going to have a bad time. So, like, personally, I'm holding off on any of my repos investigating that until we've sorted it out for the base case, kubernetes/kubernetes. But we're starting to look at it. And... there was a kubectl vulnerability that was released.
D
So if it wasn't mentioned on the call: anyone who's still on the call who has a version of kubectl, you should look at getting the latest, right? So on the 1.14 path, that's 1.14.0; on the 1.13 path, it's 1.13.5, I believe, which was released yesterday; 1.12 might be, like, 1.12.7, I don't really remember the exact one; and there's a 1.11 one. But if you...
E
The way that vulnerability works is actually really interesting. So it's basically in `kubectl cp`, yeah. And the idea is that if you're going to copy something from an exploited container in your kubernetes cluster, that exploited container could describe, within the tarball, something at a different path, and when you untar that stuff on your host side, as long as you have access to that path, you could actually, you know, write whatever file...
E
...content was being copied down from that container. So it's kind of a corner case, but it is a good thing to patch. Then... the node one is actually more interesting. The networking one is basically that there's an order to these things, and by default the portmap plugin within CNI was actually injecting rules as a prepend: they were prepending rules that might match traffic beyond the ports that were specific to the portmap case.
E
And if you do that, that means you have a way of actually governing how traffic will move through that node, regardless of whether it's traffic for that particular application, which is really the exploit surface. That one is, I think, more risky; that one is definitely worth fixing, and I'm looking forward to seeing it patched. Yeah.
D
From what I caught, it was awesome. I think this is a great format; we should keep moving forward with it. As for announcements: if I have, like, a second, I will say that there is a Working Group LTS survey out about the future of kubernetes, whether or not LTS is something that we should assess. I can post links somewhere. Nick, I can follow up offline with you and figure out where to post these.
D
Secondarily, the 1.15 release team is open for shadows right now; we kicked out the shadow questionnaire yesterday, I believe. So I can get you links to that if you're interested in participating on the release team. I think it's a really, really valuable experience, probably one of the most valuable experiences and shadow processes that we have in the community. So if you're interested in that, please feel free to reach out or check out that questionnaire. Yeah.