Description
Notes: https://github.com/vmware-tanzu/tgik/blob/master/episodes/101/README.md
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be continuing the Grokking series with: exploring the API Server
Good afternoon, everybody, and welcome to episode 101, the one hundred and first episode of TGIK. Last week, if you haven't checked it out, is definitely worth going back to watch: tgik.io/100. It was our 100th episode, and we kind of celebrated by getting everybody together. It was super fun to follow, and it's super fun to do. So this week I'm actually going to keep going with the grokking series. I'm going to keep going with the API server, and I've got to warn you...
We have folks here for episode 101 as well, from the Eastern Province of Saudi Arabia: we have Mady, and he is with us every Friday. He's actually been here every week so far, a long-time aficionado. That's awesome! We got Rory from the Scottish Highlands (thank you, Rory), and a hello from Hamburg. I'm laughing because Rory is actually from a town in Scotland whose name I promise I will absolutely slaughter if I try to say it, but I look forward to catching him at the next KubeCon, where you can instruct me on how to actually say it correctly. So that'll be fun.
Yeah, Ola from Copenhagen, Denmark. We've got Ramesh from here in SF, Philippe Martin from Paris, and Marcin from Krakow. We have George, who will be helping us with notes and things like that. That's awesome; good to see you, George. A huge thanks for that.
I've been enjoying doing these; it's a pretty fun series for me. Again, I'm looking off to my left here, because that's where the pop-out for the chat is. We have Jan from Germany, and a viewer from Perth, Australia. Hope you're staying safe out there; I know there have been a lot of fires and a lot of smoke. I'm not sure if that's true for Perth, I mean, it's a big continent, but there have been a lot of fires, so stay safe out there.
We got Mode from India, and we got some love from NYC. Gotta love NYC; it's a beautiful city. I'm on the all-the-way-opposite coast, but I really want to get back out there more often. And we have someone saying hello from Toronto, and Peter from Sweden. So hey, everybody, good to see you all again. Such a global audience; it's always such an amazing thing. So let's jump into it here.
This week's notes, as usual, are up at tgik.io/notes, so if you have notes, or you want to keep them, this is where you can put them. Really good stuff in here, so let's dig into what's happening. An exciting one, super exciting in my opinion: this has actually been in the works for some time, and I've kind of been waiting for it to happen. It's finally completed, and it is this: we, the Kubernetes community, are announcing a bug bounty program, which is really exciting.
It was announced this January 14th. Not a lot of action yet, and we'll look to see where we can see that action here in just a minute, but it is now open. There's a really good proposal that covers, basically, how they picked the vendor and what the working draft of in-scope components is; a lot of good information in here. So, what's in scope: the bounty covers code from the main Kubernetes organizations on GitHub, as well as continuous integration, release, and documentation artifacts. So if you were to find a way, for example, to inject code into an artifact, or to materially change an artifact or documentation that is hosted on the Kubernetes website, that would be worth, you know, a lot. The same goes for the way that the system itself interacts.
As we get into it, we'll cover a little bit more about what's happening there. There are some really great links to the talks, the frameworks, and the hardening guides about the tools that are used to kind of define what a secure cluster might look like, and there is already an existing process for filing a bug, or actually for communicating with security@kubernetes.io directly, if you feel like you have found an issue that is security-related. And the existing process documentation is pretty clear; let's just go there real quick. So, vulnerabilities: if you find a vulnerability with a component within Kubernetes, or with the way that Kubernetes components themselves interact...
...this is the process through which you can report it. They do actually cover what a vulnerability is and when you should report it, and they also cover a very important detail, which is that the public disclosure timing is going to be negotiated between the Kubernetes Product Security Committee and the bug submitter. We really do appreciate the careful handling of vulnerabilities; there are a lot of people out there who are running Kubernetes. The next thing I wanted to show you is the HackerOne website.
This is where you would go if you actually wanted to participate in this bug bounty; it's where you would submit the report. They cover the policy stuff right in here: what the program rules are and what the rewards are, and we can see that some of those rewards are actually pretty awesome. I mean, if you get a vulnerability in core Kubernetes and it's determined to be critical, that's 10k in your pocket. So keep your eye out.
If you see behavior that doesn't look quite right, or things that aren't working the way you expect, or some of the assumptions around those sorts of things don't hold, well, that's actually kind of how Darren Shepherd identified a security concern earlier this year. It was basically, in his interaction with Kubernetes: this seems weird that it's working this way. And that turned out to be a pretty interesting exploit. So keep your eyes out; pretty cool stuff.
A thing that I was super impressed by: George, who is our moderator here, actually put up a post, a tweet. I think he expected to put this up earlier, closer to the New Year, but I thought this was really great, so I wanted to share it with y'all. This is actually a view into the Zoom meetings for Kubernetes, and it's really fascinating, because if you think about it, this is a distributed company, a distributed culture, however you want to consider it.
Maybe George can correct me on that if I have it wrong, but I think this is the whole shootin' match, everything that we have on Zoom for those sorts of things. What was neat about this, I thought, was this last number right here, where we look at the number of clients. We got a little over 53% Mac, which isn't too surprising; lots of folks use Macs out there for development. And we have 26% Linux, which blew my mind.
That's huge. It's mostly SIG meetings, by the way. And then we have 12.2% Windows. Statistics-wise, I just thought that's a really fascinating statistic. We have a whole bunch of meetings in SIG Cluster Lifecycle, we've got SIG Release, SIG Storage, and SIG Contributor Experience, and there's a bunch of other information in here. So I thought this was really exciting and definitely worth chatting about. What do we have up next?
Stuff from the now-monthly Kubernetes community meeting: code freeze is coming up, docs will be completed and reviewed on March 16th, and Kubernetes version 1.18.0 will be released on March 24th. So we're already working on 1.18; I'm amazed by that. I'm actually going to run a little poll for us, just out of curiosity (I know I use it on Linux all the time as well): of the people here in the chat who are operating Kubernetes, I want you to put the version of Kubernetes that you're operating into the chat. It could be 1.16...
...I can't remember what version that was off the top of my head, but yeah, that was a pretty big jump, because it meant basically repaving clusters with that new version. And so I think with 1.16 we'll see wider adoption eventually, but that's going to be a little more work, and we talked about why that is: basically, APIs being deprecated, the problems and the concern around that. So I think it's going to be tricky. That's true, actually. Is that GA, or is that in preview still? I can't really remember.
That's coming right up. If you're interested in what's happening with the code base, I feel like we've talked about this before; I know that I've shown you this website, and if I haven't, here it is again. If you go to lwkd.info, you can actually follow along here, and this will cover things like the patch releases that are coming. So it's really good operational information for people who are using Kubernetes.
It's a website that you can contribute to, so if you're following some particular issue that is important to you, feel free to throw it in here. You can absolutely open an issue, and they have some really good ways to interact: you can put in a pull request or an issue at the LWKD GitHub repo, linked here at the bottom. There's lots of really good stuff here, right? So: kube-proxy iptables mode now supports dual-stack. That was worked on by Valerie; we talked about that last week.
kubeadm supports auto-retry for image pulls, in case you have a flaky upstream connection, which means you're going to have other problems later, but that's a different problem. You've got "set provider ID for cloud node even on error", which is good. You have the volume binder stuff; there are a number of different ones. The "last seen seconds" metric being taken down, that can be kind of a rough one. Preventing pods from erroneously remaining unready. There are lots of interesting other merges in here to check out, and they also have some feature gates that are being removed. You know, this is a good periodical of note, I'll say.
Next up, we have Crossplane, at crossplane.io, which has turned version 0.6. Their blog here is talking about version 0.6 enabling application delivery platforms on the road to production readiness. After recently turning one year old, the Crossplane project is excited to have closed out 2019 by going to version 0.6. Crossplane is actually a really interesting tool, because its goal is to enable you to route traffic, kind of at a higher level than the cluster.
If you go to Crossplane's website, they're basically trying to be the open-source multi-cloud control plane, and they want you to help them. They want to enable you to manage cloud-native applications and infrastructure across environments, clusters, regions, and clouds. So that's all pretty exciting stuff that's out there, and they are now at version 0.6. That's super exciting.
What else have we got here? Header and host rewriting in Contour. This is the one I was talking about. This is by Steve Sloka, who used to work with me on field engineering teams and has now moved primarily into development, and he's working pretty heavily on Contour. Contour, if you're not already familiar, is an ingress controller that we developed.
It was developed in conjunction with a couple of customers that we worked with, and this ingress controller is, you know, backed by Envoy; Contour is basically the control plane for all of it. So Contour is an incredible ingress controller that provides a lot of capability, and this article is talking about some of the newer capabilities: the ability to rewrite header and host, the ability to manipulate headers, which is super exciting if that's the thing that you're looking to do with an ingress controller.
Some of the other stuff that's actually happened in Contour recently is also pretty interesting, like the IngressRoute deprecation: since the release of Contour 1.0, HTTPProxy has become the successor of IngressRoute going forward. They're talking about the way that they're going to deprecate a field in Contour, or convert it into a different value, and they talk about future plans.
So, lots of exciting stuff happening here. Definitely check that article out if Contour is an ingress controller that might work for you, or if you're interested in knowing more about what's happening. We've also got an interview with Michael Hausenblas talking about Kubernetes on Raspberry Pis. Michael also operates that kubernetes-security.info page, I think, which talks about a lot of security stuff that you can do inside of companies.
So in the chat we've got a few other people: OCP by the end of the month. And Mady is asking: does anybody happen to know why it lags by so much? You know, I think it's actually that, as you define more and more of a heavy integration between the features that you are supporting, and especially if you're moving the control plane off onto your own equipment, when you're thinking about the upgrade path it's a bit more work to actually make sure that it's vetted and supportable moving forward.
Next up we have "Designing and Building..." Oh, this is an article by another one of my co-workers, Mr. Dan Finneran, who is in the UK. He wrote this article, and I think he'd been working on it for some time; I remember him commenting something like: I sat down to write a small article and ended up with a huge article. So if bare-metal Kubernetes is your particular flavor, definitely check this out. Dan's thoughts are, you know, pretty well put down here.
We have KubeVault from AppsCode. My friend Tamal's company has actually released a new version of KubeVault; it's now version 0.3, and there have been a lot of changes. I've seen some changes lately from AppsCode; they're reformatting or refactoring some applications to make them more usable for people, so that's always great.
What this does is act as an operator to support Vault on top of Kubernetes. So it's a way of agnostically supporting Vault on any Kubernetes cluster, rather than having Vault kind of outside of that, and they do offer commercial support. So if you're looking to deploy Vault within your Kubernetes clusters and actually make use of Vault to integrate your secrets or credential management, this is a great way to go about that.
I actually think I covered this in an episode called "Keep It Secret", I think. But if I didn't (pretty sure I did), this is actually the tool I was using to deploy Vault inside of Kubernetes. So definitely check that one out if it's interesting to you. The last article for the day, by Jane Knoller, is on diagnosing and chasing Kubernetes bugs. Kubernetes, Kubernetes... I don't think I could say that five times fast, but it would be fun to try.
Forty-four, yeah, that is interesting. So forty are at 1.16 and seven are on 1.17. Yeah, that makes sense; good statistics, thank you all. All right, those are all of our articles, the news in Kubernetes. That took 24 minutes; I've got to get that down, I think, but there's so much fun stuff that happens every week that I want to talk about. So it is what it is.
You can kind of write your own end-to-end tests that validate just those particular features and then return the results of those over time. So I will start with an end-to-end test, make sure that everything's working the way I kind of expect, and then proceed from there. The other thing I might do: obviously, I think a lot of components use events to signify things that are working or not working, and so, like any Linux system...
...being able to go to the logs and understand those things is important. And the last one I'll call out, because I think it's obviously super important, is metrics: being able to alert on, and have an understanding of, where to go and look for the behavior of particular metrics as they relate to the Kubernetes platform over time. That's also a pretty killer one, right, because then you can go to Prometheus or your Grafana instance...
...however you've got it wired up, and take a look at how the general system is operating. So if I were to go about that, I'd kind of start at etcd and work my way up the stack, right? If etcd is healthy and happy and connections are good, and you're not seeing a lot of I/O wait or anything else like that, then I move on to the components themselves, and work onward from there.
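Working bottom-up like that can be sketched with a few probes. These assume admin access to a live cluster and are illustrative, not commands from the episode:

```shell
# API server readiness, which includes etcd health checks (Kubernetes 1.16+)
kubectl get --raw='/readyz?verbose'

# Recent events across all namespaces, newest last
kubectl get events --all-namespaces --sort-by=.lastTimestamp | tail -n 20

# Node resource pressure (requires metrics-server to be installed)
kubectl top nodes
```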
So what I've done here is: I just created a service account called test in the default namespace, and then I created a ClusterRoleBinding assigning the test service account a cluster role of cluster-admin. And now I'm going to run a little script called create-kubeconfig. You can look at that script, but basically what it's doing is creating a kubeconfig for the service account that I point it at, so I'll say test.
So what I'm doing here is basically running kubectl get pods, and I'm running it very verbose, and I'm grepping for the bearer token. This is interesting, because if you do that in verbose mode, what you're getting from kubectl is the curl command that was used to issue this particular GET, right? So we can see that we're doing a GET...
A
You
can
see
we're
appending
a
authorization
bearer
token,
which
represents
the
token
that
is
known
inside
of
my
cube
config
and
then
we're
specifying
the
IP
address
of
the
API
server
and
we're
giving
it
the
API
path
that
we
want
to
get
so
we're
gonna
get
500
pods
back
by
default.
A really cool thing; I think Kelsey Hightower turned me on to that one the first time. So this shows us how token authentication is working, right? If I were to go ahead and unset my KUBECONFIG and do the same command, but without that, I'll see that there is no bearer token, and that's because I'm using certificate auth via my default kubeconfig. With kubectl config view --minify I can see that, under my user, the thing I'm actually using to authenticate to Kubernetes is client-certificate-data and client-key-data. So the way I'm authenticating to the API server by default is by using a certificate.
kubectl get sa -n kube-system. There was, for a time (and I think it is probably still true for a lot of you, so bear with me), this role, this cluster-role-aggregation-controller, right? And what I wanted to do was actually determine what permissions this cluster-role-aggregation-controller has inside the cluster. Or, for example, what if I wanted to just go through all of the system service accounts inside of the kube-system namespace and understand what permissions they might have?
In kube-system, if I look at the cluster-role-aggregation-controller and do a describe, I can see the token that's being used. The token that's been assigned to that controller is actually held inside of a secret that is named cluster-role-aggregation-controller-token- plus a random suffix, right? So if I do kubectl describe secret in kube-system and look for that secret, the result of that command will show me the token that is currently being used by that controller.
I'm now using a token; I'm using the kubeconfig test token, and that's how I'm going to authenticate by default, right? And we can see that, because if I do my grep for bearer, I can see that I'm passing a bearer token, which is good. But now I want to pass a different bearer token: the token that is actually being used by the cluster-role-aggregation-controller.
So what's happening here with this little thing: most of you probably already know it, but if you don't, what's happening is that I'm taking the standard error output (that's what's represented by this 2) and I'm appending it to standard out, and that way I can actually grep it. Kind of a neat thing, right? So I can see that the token is being sent, and that's a good thing. So now the token is being sent, but I'm still using certificate auth. And now here's the part that really confused me.
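The redirect itself is plain shell, nothing Kubernetes-specific; a minimal sketch:

```shell
# Stand-in for kubectl's verbose logging, which goes to standard error
noisy() { echo "Authorization: Bearer abc123" 1>&2; }

# Without the redirect, the pipe only carries stdout, so grep sees nothing
noisy | grep -c "Bearer" || true        # prints 0

# With 2>&1, stderr is merged into stdout and the grep matches
noisy 2>&1 | grep -o "Bearer abc123"    # prints: Bearer abc123
```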
We can see that the output of this is very different from the output of what the actual service account can do, which is what threw me. The reason this is happening (it took me a minute to figure this out) is actually to do with authentication, so I feel good that we're covering it. The reason this is happening is that when you use certificate authentication, it happens really early in the process, because you're actually negotiating that with the API server at the TLS layer, at the connection layer, right?
So when you actually authenticate to the API server, you say: here is my client certificate, and you're actually encrypting that traffic back and forth using that client certificate and the server certificate that the API server has. And in that exchange, that's where your authorization... sorry, your authentication happens.
So even if I embed a token at that point, it would be ignored, because I am already authenticated. And that was the part that really tripped me up: because I'm already authenticated with the certificate, I can embed any token I want and it will be completely ignored. Now, one way to work around that, of course, is to authenticate with a token, which is what I'm doing here with the export, right?
So now, if I do a kubectl get, or kubectl config view, we can see that I'm providing a token to authenticate. And now that I'm using a token to authenticate, I can override that token by passing the --token argument. So that was kind of mind-boggling to me, but I wanted to share it with you; I thought that was pretty neat.
If you do grab that aggregation-controller token... I think it might actually have been Rory who told me this originally: for a while, this particular cluster role was just using a cluster-admin permission. It wasn't actually broken down to the escalate verb; it was allowing any verb on any resource. That means that if I were able to impersonate this cluster role (and this is true of anything below 1.16...)
...if I remember correctly), then I'd have cluster-admin across the whole cluster. So be careful that your users don't have read on secrets inside of the kube-system namespace, because if they do, they could easily escalate their privilege. This has been fixed as of 1.16, but that's just something to know. Some interesting stuff out there. All right.
I think that link has recently changed. So, I want to talk about some of the ways that things authenticate and authorize. We talked about this a little bit in the last episode; we talked about how the node authenticates, and about some of the ways that we can limit the node to only the resources that are known about by that node. And if you go back to episode 19, I think it was...
...I definitely recommend checking it out. But, you know, there were some of the permissions that you could actually set, like a user named Alice, and here are the attributes that you can set for her, and ways you can determine that attribute from the certificate and other things like that. That's ABAC, which was introduced really early on. It's still out there now, but not a lot of folks use it; in fact, as far as I know, I can't think of a single instance.
I've been surprised before, but RBAC is the way that we have now, and RBAC is actually where we get things like the model for cluster roles, cluster role bindings, and role bindings, all of that good stuff. So I kind of want to walk through that, and I also want to talk about some of the features that RBAC provides and its capabilities. Yeah, it looks like upgrading from 1.5 was where it landed, and that was a pretty big change.
The way to think about these, to put them in your head: a Role can be constrained to a namespace, while a ClusterRole is made available cluster-wide. And the role itself is really just a grouping of rules, right? A grouping of rules that describe what you can do, and to what resource. When you're defining a Role, you can only define it as constrained to a namespace, but you can define a ClusterRole, even with the very same definition, across the entire cluster.
Now, what falls out of this is that if you define a Role within a namespace, you can only use it within that namespace. You can't reuse it in another namespace; it can only ever be bound to service accounts and entities with regard to that namespace. It can't be bound to any entity outside of that.
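A namespaced Role like the test one in the demo can be sketched like this; the verb and resource set are inferred from the "get pods" description:

```shell
# Roles are namespaced: this one exists only in 'default'
kubectl create role test --verb=get,list --resource=pods --namespace default

# Present here...
kubectl get roles --namespace default

# ...but absent in kube-system: this returns NotFound
kubectl get role test --namespace kube-system
```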
So I just created a role, and if I do kubectl get roles, I can see that the role has been defined. kubectl describe role test shows that what the role is doing is allowing me to do one thing: basically, get pods. But this role is defined within this namespace, so if I were to look in kube-system, for example, there would be no role called test. Now, why would you want to be able to define a role within a namespace?
The reason I might want to define a role within a namespace is actually to allow the administrator of a given namespace to define roles that are going to be associated with service accounts or entities within the namespace that they are the administrator of, which is pretty neat. That's why I might want to do it. If I define a ClusterRole, however... you can do kubectl get clusterroles...
...and there are some default ones that are defined within the cluster: we've got view, we have edit, cluster-admin, and admin. Those are the default ones. kindnet is actually put there as part of the kind networking that's deployed inside of a kind cluster, but these other four are generically created within the cluster when you do the install, and they are made available cluster-wide. So I can bind admin to a given namespace by using a RoleBinding, or I can give cluster-admin-level permissions by defining a ClusterRoleBinding. So, very similar to roles defined at the namespace scope or roles defined at the cluster scope: you have ClusterRole and Role, which are the same kind of thing, and you have the binding itself, which is the way that you actually apply that permission to a user. kubectl describe clusterrole view...
...and this permission, the view permission, is meant to be kind of a read-only view: a reasonably secure implementation of view that can be scoped to a namespace. So even though it's defined as a ClusterRole, you can actually constrain a user to this role within a given namespace by defining a role binding. So let's create one for view.
And we can see that the permissions are that view permission set, right, the same thing that we saw before. So this allows me to impersonate a user; that's what --as is allowing me to do. I can impersonate a service account and interact with the cluster, so with kubectl auth can-i --list, as this particular view user, I can see what permissions they have.
A
The
last
thing
I
want
to
talk
about
from
the
our
Mac
perspective,
so
we
talked
about
roles
and
cluster
roles.
We
talked
about
rural
bindings,
which
are
constrained
to
a
namespace
and
cluster
bindings,
which
are
cluster
wide.
The
next.
The
last
thing
I
wanted
to
point
out
here
was
the
the
aggregate
stuff
and
that's
actually
kind
of
wild,
and
so
let's
look
at
that
real
quick.
So.
A
A
If you see the matchLabels selector with the label rbac.authorization.k8s.io/aggregate-to-view set to true, then the rules that are associated inside of that other role get taken and bubbled up under this particular permission set, which is wild, right? That means that if I define some subset of rules, I can aggregate those rules up.
So if there are any entities that have this particular label set to true, then they are helping define what resources, verbs, and rules are actually going to be effective for this particular ClusterRole. And further, we see this label value here, where we specify, effectively, the implementation on the other side of that, right?
So this particular ClusterRole is allowing its permissions to bubble up into edit. Because this is configured in this way, when the edit role is defined, any rules like the ones defined in this role are bubbled up into the edit role. And we can do this not just with the edit role and those sorts of things; we can obviously take this abstraction even further, right?
So if you're defining particular subsets of permissions, and you've broken them up in ways that make sense for your organization, you can use aggregation as well. That's the last piece that I wanted to talk about there. And then, from the code perspective, I do want to show this, which is...
...what the permissions are for things at different versions. You should definitely check this out; they're broken up into different categories. This testdata represents the effective permissions that will be applied to your cluster, right? So if you wanted to understand what I was just referring to earlier: if we look at the controller roles and we do a search...
...for the controller, there is the difference, right? In 1.15, the permission set for the system cluster-role-aggregation-controller cluster role was defined here as all stars, so complete permission over everything, whereas in the more recent version, since the patch, this has been constrained to give only those permissions that the role actually requires to do its job. All right, RBAC. Is there anything else I want to talk about from that perspective? Obviously RBAC is pretty pervasive.
A
It
provides
a
lot
of
capability.
The
only
other
thing
I
would
like
to
point
out.
Is
this
other
tool
called
audit
to
our
back,
which
I
think
Joe
covered
and
if
you
didn't
probably
well,
if
there's
the
thing
written
by
it,
Liggett
Jordan
Liggett,
who
works
at
Google,
now
really
great
guy,
like
truly
an
awesome
individual.
A
So
this
tool
basically
allows
you
to
do
a
thing
where
you
can
audit
using
the
kubernetes
audit
capability,
the
permissions
that
our
particular
entity
are
going
uses
and
then
build
an
our
back
policy
based
on
them,
which
is
actually
pretty
neat.
So
definitely
check
that
out,
if
that's
something
your
rest
of
it,
let's
get
down
to
it
here,.
A
So
admission
controllers
and
we're
gonna
talk
about
kind
of
the
generic
admission
controllers
that
are
out
there
and
then
we'll
probably
dig
into
a
little
bit
more
about
some
of
the
stuff
that
we're
gonna
come
to.
But
before
just
to
kind
of
frame.
The
discussion
I
want
to
talk
a
little
bit
about
kind
of
where
this
is
a
little
bit
of
a
history
lesson
on
the
kubernetes
side
of
things
right
so.
A
Initially,
the
approach
to
defining
admission
controllers
was
to
actually
define
those
controllers
individually,
as
we
found
value
in
what
they
were
being
defined
as
and
we
can
like
go
down
here
and
see
kind
of
the
list
of
plugins
that
are
enabled
by
default.
So,
like
we
built
the
name
same
plant,
the
name,
space,
lifecycle,
good
admission
controller
or
the
limit
Ranger
service
account,
there's
a
bunch
of
other
controllers
that
are
running
and
these
all
run
in
the
controller
manager
right.
A
So
as
you
as
you,
as
you
enable
or
disable
these,
these
are
going
to
run
inside
the
controller
manager
they're
built
in
admission
controllers,
and
this
was
the
pattern
that
we
developed
inside
of
the
community
for
managing
admission
over
time,
and
so,
as
time
went
by,
we
created
more
of
them.
We
deprecated
some,
etc,
etc.
But in that process we realized, you know what, this should probably be something that's a bit more generic. Like, there's no reason per se that everything that somebody would want to do at admission should be defined in something that's compiled into the core, per se, right? There's no reason to actually have that code be constrained to the core code base that Kubernetes provides.
Not yet — I'll probably do one after I get done with this whole series. But dynamic admission control lets you define what that controller logic looks like, and what business logic is important to you. Some examples of dynamic admission control are things like OPA Gatekeeper, which is a great project that's out there looking to kind of replace PSPs and some of the other kinds of dynamic admission control you might concern yourself with. But for the longest time we didn't have dynamic admission control — I can't actually remember when it was introduced.
At this point... yeah, it looks like 1.16. It was introduced — oh no, 1.9 was when it went beta, and I'm not sure when it was alpha, but it's been around a while now, and now I think it's stable. It's actually v1 as of 1.16, so that's exciting. But back to admission controllers, because they're still a thing. So even though we defined them, and we think that there's a better model for that using dynamic admission control, there are still a number of admission controllers that are going to be brought on by default, or run by default.
Within your API server. And it's a little tricky, sometimes, to determine which admission controllers are enabled for a given cluster by default. You can see this in the log file for the API server when it starts up, but there's no endpoint that the API server exposes that allows you to poll it and ask which admission controllers are running. So you do that by looking at the default set alongside the configuration that you have for the cluster, right?
What I'm looking for here is the way that I thought we were, by default, enabling some node authentication — oh, that's not on by default, no, okay — but there was a way that we were actually enabling the node authorizer, and I don't see that here. No, that's the NodeRestriction admission plugin. We can also check the API.
So in this configuration there's the admission plugins line, right? This is enable-admission-plugins, and this is going to add the NodeRestriction plugin to the default set. If you wanted to remove one of the default admission control plugins, you would have to use the disable-admission-plugins field, right? But by default — thank you, that's what I was looking for; is it Suresh? that was awesome — you can actually see that we are adding the admission controller NodeRestriction, and we talked about why in episode 99.
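As a sketch, those two flags look like this on the kube-apiserver command line — the plugin being disabled here is just an example, not something the episode recommends:

```shell
# Add NodeRestriction on top of the default admission plugin set,
# and (for illustration only) drop one of the defaults:
kube-apiserver \
  --enable-admission-plugins=NodeRestriction \
  --disable-admission-plugins=DefaultStorageClass
  # ...plus the rest of your API server flags
```

In a kubeadm-managed cluster these flags live in the static pod manifest for the API server.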
We can see the default set for the API server, and you can actually grep for this in the API server itself. So if you have the binary, you can just dump the help output and grep for enable-admission-plugins — and these are automatically updated in the documentation. So these are the default set, and we can see which ones are there, and we can also determine, based on the enable configuration flag, which ones we're adding to it. So NodeRestriction isn't on by default.
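That dump-and-grep, assuming you have the kube-apiserver binary handy:

```shell
# The flag's help text lists every admission plugin enabled by default:
kube-apiserver --help 2>&1 | grep -A 2 enable-admission-plugins
```

The same default list is reproduced in the admission controllers reference documentation, generated from this help text.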
So, admission control — we talked about the built-ins. Some of the other admission controllers that are important are things like pod security policies. There was a recent episode on pod security policies; I'm not going to look into it today, but we do cover what's happening there. The next thing I want to get into, I think, is — wait, no, it's two o'clock, so I think we're still doing okay on time.
The next thing I want to get into is exploring the API and understanding how we can go about this, right? And so the next thing we're going to talk about is the API server itself: how do we actually see these things, how can we interact with that API, and then we can kind of start playing with stuff from that perspective. So let's jump into that next.
So the Kubernetes API is laid out — or explained — pretty well by kubectl explain. So, for example, if we pipe the output to less, we can see a lot of information about all the fields with regard to the body of the pod spec object, right here inside of the explain output. So if I did kubectl explain pod.spec and then piped that to less, we can actually see what's going to be defined underneath the spec object inside of the Pod object.
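The command as typed in the episode, plus how you keep drilling down:

```shell
# Walk the schema of the pod spec interactively:
kubectl explain pod.spec | less

# Nested fields work the same way, dot-separated:
kubectl explain pod.spec.containers.resources
```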
Within the Kubernetes API we can see what fields we have, what the defaults are, and whether they are required or not — there's a ton of information here. It describes all the different things that we can define at a pod level for the entire cluster. And so it's useful if you're looking for a way to understand some particular feature or flag or field that's been defined within a particular pod — like you're asking, what does that do, you know?
But yeah, as the API continues to grow, as features and things like that come along — overhead, has anyone seen that? It's actually alpha-level in 1.16, so overhead is a relatively new field, but it's already documented inside of the kubectl explain output. So a ton of this stuff — some of these are beta features, some of these are alpha features — but we try to do our best to make sure that all of the things are covered within the pod spec.
So if you're exploring that object, that's one way to do it. And if we looked at just pod, for example, we'd be able to see the entire context, right? So we're expecting to see Pod; it's going to be version v1; we're going to see the API version defined — and this is going to be v1, defined up at the top. We're gonna see...
The kind, which is obviously Pod, and then we have our metadata, our spec, and our status fields, and it explains what those things are. The ones that you're going to be concerned with are going to be metadata and spec — this is where you populate the information about a particular pod. If we took a look at one...
So here's my little bash image. What I was using there to create the pod was actually just the kubectl run command — you could also use kubectl create — and it's allowing me to kind of cheat: I don't have to populate every field in the spec. All I really have to populate is the image name, and nothing else, which is great. So if I do kubectl get pod...
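Roughly what that cheat looks like — the pod name here is just an example:

```shell
# Create a workload from nothing but an image name; kubectl and the
# API server fill in every other field with defaults:
kubectl run bash --image=bash --stdin --tty

# Then dump the full object to see everything that got defaulted:
kubectl get pod bash -o yaml
```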
All of that status information is populated by Kubernetes itself, but above it there are a bunch of defaults that are defined, right? So we have metadata: by default, when I do a kubectl run, it's going to populate this label set, and then it's going to go ahead and generate a name — because it's part of a deployment, that name is going to be generated by the deployment controller, or actually by the replica set controller — and then define the namespace and define the owner reference.
A
This
was
actually
deployed
as
part
of
a
deployment,
so
we
can
see.
There's
a
current
owner
of
this
pod
was
a
replica
set.
If
we
looked
at
the
owner
of
reference
for
that
replica
site,
we'd
be
able
to
see
that
it
was
owned
by
a
deployment,
etc,
and
then
down
here.
This
is
actually
where
we
defined
our
image
and
then
the
rest
of
these
fields
are
just
populated
by
default.
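You can follow that ownership chain with jsonpath — the pod and replica set names below are placeholders:

```shell
# Who owns the pod? (a ReplicaSet, for a deployment-managed pod)
kubectl get pod <pod-name> \
  -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'

# And who owns that replica set? (the Deployment)
kubectl get rs <rs-name> \
  -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'
```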
A
When
I
use
that
cube
kettle
run
capability
right,
so
I
specified
I
wanted
a
std
in
true
and
I
had
specified
TTY
true
when
I
did
I
did
IT
that's
what
these
things
are
doing
for
me,
but
the
service
account
that
was
mounted
in
automatically.
That
was
all
done
by
kubernetes.
It's
when
defining
the
pod
genus
policy.
These
are
the
defaults
for
things
right,
whether
we
actually
enabled
service,
linguist
or
DNS
policy.
The
node
name
was
actually
populated
by
the
scheduler,
but
I
can
populate
it
as
well.
It's
within
the
prospect.
There are actually a number of things out there to solve this problem — I'm curious how this one differs from some of the other ones that are out there, like Kubebuilder or the Operator SDK. Kubebuilder supports the ability to define things like admission control as well, breaking them up across the different webhooks — we have a validating and a mutating webhook.
Some of these make sense, right? You have rbac.authorization.k8s.io, you have policy, you know, node.k8s.io — you have a bunch of other API groups that are kind of defined at that layer. But what I wanted to point out in this output is actually /openapi/v2, which is the OpenAPI spec for the entire cluster. I'm going to show it to you, and then I'll actually show you another way to interact with it, which will be kind of nice — if it'll load.
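You can pull both the path list and the OpenAPI document straight through kubectl:

```shell
# List the API paths the server exposes (/apis/rbac.authorization.k8s.io,
# /apis/policy, /apis/node.k8s.io, ...):
kubectl get --raw / | jq -r '.paths[]' | head -n 20

# Fetch the OpenAPI v2 spec for the whole cluster (it's large):
kubectl get --raw /openapi/v2 > openapi.json
```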
If you do a get, it'll consume input in the form of JSON, in the form of YAML, or in the form of Kubernetes protobuf. The description has the information to actually get the operation ID, and then what it will produce: it will produce output of either JSON, YAML, or Kubernetes protobuf, depending on what we ask it to return. The responses that you might get back from this call are a 200, which means that it worked, or a 401, which means you're not authorized. The schemes...
I might have removed it — I think I did. So there are plugins for your browser — hi, Suresh, good to see you, I'll talk about those in just a minute — there are plugins for your browser that would allow you to explore the OpenAPI document, as long as you point them at one. So there are definitely tools, and I think... let's just look this up real quick, because I remember this being super valuable for me, so it might be valuable for you.
It's not working — fun stuff when plugins don't go. Anyway, you can grab a plugin that will actually allow you to work with Swagger, and it will give you a really good, interactive view of how the system works. Unfortunately, this isn't working for me, but you get the idea — if you actually had a tool that allows you to interact with those things.
Anyway, so that is another way to explore the API, which is pretty neat — you can interact with the API from that perspective. But there are some other things I wanted to show you real quick. If you're trying to understand how the API works because you want to program against it, there are some really great tools, very similar to the one that I showed you earlier with the curl command, right? So if I were to do kubectl get pods --watch -v 10...
So, for example, this is an API call that is usable — I can actually do this with curl now, right? So if we did that directly, we'd be able to see pods changing. So if I were to split this horizontally and kubectl delete pod, we can actually see the watch happening up here at this particular layer.
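High verbosity makes kubectl print the underlying HTTP calls, which you can then replay with curl — the proxy port below is kubectl's default:

```shell
# -v=10 logs every request and response, including curl-equivalent lines:
kubectl get pods --watch -v=10

# The same watch, hand-rolled through kubectl proxy:
kubectl proxy --port=8001 &
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=true'
```

Deleting a pod in a second terminal makes DELETED events stream out of the watch, as in the demo.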
So this is the request body that was encoded and sent, right? And we can see that it's actually just JSON — it's in the form of a JSON deployment specification — and it contains everything that we had inside of our actual deployment specification. If we stripped a bunch of this stuff out, we'd be able to see that what was being sent was significantly less. And so this is the way that we can interact with the API server — we don't necessarily have to use kubectl.
What these things allow us to do is provide kind of a structured text that allows us to define those fields that are necessary for these particular objects — in this case, a Deployment object, right? This just gives us a way to define declaratively what we want the cluster to do with this information. All right — what are the fields that we need, what is the image that we're going to use, what is the pull policy?
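A minimal example of that declarative shape, fed to the API server via kubectl — the names and image are arbitrary, not from the episode:

```shell
# Everything the server needs is in the structured body; all other
# fields get defaulted, just like with kubectl run:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bash-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bash-example
  template:
    metadata:
      labels:
        app: bash-example
    spec:
      containers:
      - name: bash
        image: bash
        imagePullPolicy: IfNotPresent
        command: ["sleep", "infinity"]
EOF
```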
A
B
A
A
What this allows us to do is see those resources that are namespaced or globally available, and you can see the names of them. And the neat thing about kubectl api-resources is that it will actually enumerate all the things — not just the ones that are core to the cluster. If you add a CRD or something like that, you're going to be able to see those resources that are defined by CRDs inside of the api-resources output.
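The command in question:

```shell
# Every resource the server knows about, including CRD-backed ones;
# the NAMESPACED column shows namespace- vs cluster-scoped:
kubectl api-resources

# Only the namespaced ones:
kubectl api-resources --namespaced=true
```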
And so, if you have the ability to see api-resources, you're able to enumerate all of the things that can be defined within the cluster. If we had defined an operator that had, you know, MySQL databases or whatever as an object, we'd be able to see that output here, which is actually pretty neat. And then, if we look at kubectl api-versions — this is the other one that's neat — we can actually see the versions of things that are registered. Now, this is what bit me earlier.
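As typed:

```shell
# Every group/version the server currently serves:
kubectl api-versions

# Just the batch group versions discussed below:
kubectl api-versions | grep '^batch'
```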
You can see that there are two batch versions, right: there's batch/v1 and batch/v1beta1. And also I want to point out that kubectl api-resources, as I pointed out before — these things are going to be defined with regard to your cluster. So anything you have defined within your cluster, you're gonna see it here; if there's a CRD that has been registered, anything else like that, this output will be tailored to your cluster.
It's not going to be a generic reading off the API spec that Kubernetes comes with — it's actually interacting with your API server and returning values from it. Last thing I want to say about that: one of the things that bit me recently was that I was trying to remember what the version was for CronJobs. kubectl explain — this is what I should have done, not what I did.
And up here at the top I can see that the version for CronJob that's defined within this particular cluster is batch/v1beta1. Once it graduates, you'll see the value of this inside of kubectl explain change to batch/v1 — but it hasn't actually done that. So what got me was that I was trying to define a CronJob, and I defined it as batch/v1, because I thought, oh, it's probably stable by now — and it did not match.
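The check that would have saved the day — the header of the explain output carries the served group/version:

```shell
# The VERSION line shows which group/version this cluster serves
# for the kind (e.g. batch/v1beta1 at the time of this episode):
kubectl explain cronjob | head -n 3
```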
So there are ways that we can interact with the API server to determine that. We can use kubectl explain — it's the simplest way to see the matching between the kind and the version, right? So even though we can enumerate all the resources, the resources don't show us a mapping to version; and we can enumerate all the versions, but they don't show us a mapping to resources. How do we actually get that mapping, you know, in a way that's viewable? The easiest way is to do it with kubectl.
So we can see we can interact with the API directly, right: kubectl get --raw /apis/batch, piping this to jq just to give us a pretty output, and then we're gonna go look at what's underneath v1beta1, and we can see that CronJob is defined there, all right? So the resources that are defined underneath v1beta1 are cronjobs — the kind CronJob — and underneath v1 we have regular jobs.
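The discovery calls behind that, roughly as run in the episode:

```shell
# Which versions does the batch group serve?
kubectl get --raw /apis/batch | jq '.versions[].groupVersion'

# Which kinds live under each version?
kubectl get --raw /apis/batch/v1beta1 | jq '.resources[].kind'
kubectl get --raw /apis/batch/v1      | jq '.resources[].kind'
```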
Exploring the API: we talked about kubectl explain, we talked about the OpenAPI stuff, ooh, the deprecation docs. We've also talked about the different versions — v1beta1, v1 — and the fact that things have moved around within the API. We used to have the extensions path; that is no longer available after 1.16, but it used to exist, deprecating part of the API.
Deployment, for one — those sorts of things we talked about in a previous episode as well. But if you want to understand more about how deprecation works and what that capability is, definitely check it out; it definitely explains what the expectations are. This gets into the contract around it.
We talked a little bit about some of this earlier in the episode — feel free to revisit that episode to explore more. One of the big ones that I didn't talk about was that, even though there are resource types defined at a global level, there are also specific resources that are defined at a namespace level, right? So if you wanted to understand what pods are actually deployed within the cluster at this level, we can actually pull up this layer. So, pretty neat stuff.
So here we can see the definition of those resources that are defined, right? We can see that there are pods and those sorts of things, and beside them — we can see ConfigMap: it takes these verbs, and this is the short name for it, cm. You can see those sorts of things that are defined here.
So how those things are represented, both in a namespace view and in a global view — it's all going to be interacting with the API itself. You have your watch bookmarks, the ability to actually watch things. You have the ability now to limit things — that's a relatively recent ability — so you can...
So if you had thousands of pods, and you were interacting with them using curl or one of the frameworks like Python or Go or one of those things, you're not gonna sit behind and wait for everything to respond — you're gonna get it chunked for you by default, which is a good thing. It's really helpful for things like dashboards, for example, receiving views as tables: you can specify table output, alternate representations of resources — you want to see it as a protobuf.
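A sketch of both of those against the raw API — limit/continue for chunking, and an Accept header for the Table representation a dashboard might request (run behind kubectl proxy; the port is kubectl's default):

```shell
# Chunked listing: ask for at most 5 pods; the response carries a
# `continue` token for fetching the next page:
kubectl get --raw '/api/v1/pods?limit=5' | jq '.metadata.continue'

# Table representation of the same resources:
kubectl proxy --port=8001 &
curl -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
  'http://127.0.0.1:8001/api/v1/namespaces/default/pods'
```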
Server-side apply gives us the ability to move a lot of the logic that was in the apply capability of the kubectl client up to the server side. So now, when you're deploying — when you're creating things using the API — there used to be a lot of magic that we did inside of kubectl on the client side to manipulate resources in such a way that they would just work.
This is actually one of the biggest challenges: a framework that was making use of, you know, deployments or objects like that — for a while, when you would define a thing, the kubectl client would actually make a bunch of assumptions about what needed to be in there, and it would go ahead and populate those things on the client side before actually submitting it to the API server.
And obviously you can see there's a real challenge there, because anybody else who's going to interact with the API server directly is expecting those things to continue to work — but they don't know that, for those things to continue to work, they have to actually implement that magic themselves. Which is the problem. So server-side apply cleans that up quite a lot, which is good.
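You can opt into it from kubectl — the flag is real, the manifest file name is a placeholder:

```shell
# Let the API server, not the client, merge and default the object;
# field ownership is then tracked server-side:
kubectl apply --server-side -f deployment.yaml
```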
We're probably going to beat up the API server for one more episode, and it'll be the next episode I do, unless I decide to do something else. As always, if there's something more that you want to explore, or some other piece of information, or some new project that's exciting that you want to see covered, you can just drop a note there in the TGIK repo — actually, I should link that. That's a good point.
So inside of the TGIK repo there's going to be Issues, and you can just open an issue if there's something you want to see covered, or want to see re-covered, or anything else like that — this is where we actually get a lot of our ideas for episodes. You can also reach out to me on Twitter and ask me questions — interact with me there, or on the Kubernetes Slack, if you have questions. So thanks for all your time today. That was a great episode.