From YouTube: TGI Kubernetes 044: Knative serverless
Description
Show notes available at: https://github.com/heptio/tgik/blob/master/episodes/044/README.md
Hello and welcome, everybody, to TGI Kubernetes. It's been quite a while, so a huge thank you to Chris for holding down the fort while I've been so busy with other things.
Well, before we get started: first of all, today is July 27th, and the plan today is to go over Knative, something new that was announced at GCP Next, and so we're gonna dig into that. I haven't played with it before. I've done a little bit of reading and pre-gaming just to make sure I'm not gonna immediately run into any roadblocks, but that's what we're gonna be doing.
So for those who don't know, my name is Joe Beda. I am the CTO and founder of a start-up called Heptio. We're working on bringing Kubernetes and cloud native technologies in general to a wider enterprise ecosystem. So it's either me or Chris doing this, and we're talking about bringing some other folks on board.
We try to do a screencast every Friday called TGI Kubernetes (that's what you're watching), where we cover new technologies in the Kubernetes world and just play with things. Sometimes we do some live development. The idea is that by watching us play with these technologies you get an idea of how they work and come to understand them; we learn about things together.
What I love to do as I start off is say hi to everybody who's been joining me. So hopefully y'all can hear me and I'm coming through loud and clear here; it's great to see so many folks actually joining. Then I'll spend some time going over sort of what's happened over the last week, hit some of the highlights, and then we're gonna jump in.
So thanks for tuning in from all these disparate places. Let's see: we have Suresh from Hamburg, Matthew in Bristol, Chris in Detroit (how's it going, man), Marko is in the Netherlands, Joy in Richmond. Lee Sianno is saying happy sysadmin day, everyone; someone else in London too. Ralph, how's it going, Ralph, long time, man. George is actually signed in as the Heptio account, so he can help facilitate, take notes and stuff, since you can't post links into the comments here.
Let's see. So, Vishwanath from Hamburg. Elmadi, good to see you; we're still enjoying that honey that you sent. Keith from Scotland, Rory... go ahead, I'm sure I'm butchering these names. A gift from Portland, Peter in Santa... there's too many for me to go through. This is awesome. Thank you, everybody, for joining me.
I don't want to spend too much time, but, like, wow: Chile, Mariano in Barcelona, Dylan in Stuttgart. So yeah, awesome, alright. And hopefully some of the folks that have been working on Knative have been able to join us also, and they can help make sure that if we go off the rails, we don't go too far off the rails. And Josh from London, good to see you. So with that, let's just go ahead and jump in.
So, one of the things as we've been playing around with this stuff (and I'm going to switch to my screen here) is we've been taking notes as we go. George is going to edit this stuff, and so this is what I've been using to keep track of what's happening, and then we're going to check this into a GitHub repo after the fact. You can sorta see George taking notes as we go here. The first thing that I want to cover here... and let's see, can I just click on this? Does that work?
No. Oh, I'll do this. Look at that, we can do the split preview. Here it is: Kubernetes won the 2018 OSCON Most Impact Award, and this is really, really interesting and exciting. Kelsey reached out to me a little bit before (I wasn't able to make it to OSCON) saying that they were gonna do this. They were able to get a lot of folks up on stage; I think Brendan was there, and Brian and Tim, who were super early with the project, were there, and that feels really good too.
So, you know, seeing everybody recognize the community and the impact that everybody has had there. One of the things (and I haven't heard an answer; maybe it's in here) is that there was a physical award, and I'm wondering where and how that award ends up, right? Because the community doesn't have an office or a trophy case, so I'm not quite sure what happened there. I'd be really interested to hear about that. So yeah, that's really exciting.
I haven't had a chance to try it (we should do a TGIK on this), but Kubernetes is now part of Docker Desktop, and so it's installed in general. It's a pretty big download, but that actually is really interesting too. Yeah: slice the award up into tiny pieces; we need to shard it and then send it out, so everybody gets something. You know, there's this cool thing, and I bought every single one of these: it's called the Mini Museum, and it's a Kickstarter; I think they're doing another Kickstarter now. What is it? It's a block of lucite with these itty-bitty samples of cool stuff.
So it's like a mini museum that you can hold, and each of these things (these things aren't cheap, but they're really cool) you can actually look at in 3D. So what we should do is one of those with the award, where we create little slices of these things and ship them out to everybody. All right. Oh yeah, like the Stanley Cup, where it gets passed around.
It can do a world tour; that'd be fun. All right, so Docker Desktop. I haven't had a chance to play with this; I've been using Docker sort of just in its pure Docker client form. I know that those folks have been hard at work creating a desktop experience around this. My understanding is this is a little bit of a Minikube alternative. It'd be interesting to maybe take some time at some point and do a little bit of a compare and contrast with the Docker Desktop stuff.
What have we got: chopping up a distributed system, exactly. And then, okay, stuff announced at Google Next; I think there's two things. A viewer says: okay, so congrats on the three-year anniversary, and do you have a favorite moment during development of Kubernetes that you can share? Okay, the CNCF has the award; I'm not sure about that. Yeah, Paris is saying that they were holding on to it.
Does anybody... I don't know, but I guess that's probably as good a place as any. So, two things that I think are really interesting to folks coming out of Google Next. First, there's GKE On-Prem. This is in a closed alpha right now, and this is the idea of using a lot of the techniques and management that go into GKE for actually managing stuff that's running on premises. Out of the gate, this is going to be running on top of VMware.
So there's still this assumption that you're running on top of something that can be automated; it's not really bare metal, but it is on-prem. There's early access going on there, but I think the idea is to provide a consistent experience and management across these things. They're talking about managing both Kubernetes and Istio on top of this, so that's really interesting, and integrating with things like Stackdriver and such in the console. So yeah.
But I know they had some sessions on that where they go into a little bit more detail. So that's super cool. And so: Philippe from Paris, and then Matt Moore, who is working on the Knative team, says hello from Knative. He is just distributed; he is truly cloud native; he is living in the cloud. Thanks for joining us, Matt. All right, and we're gonna talk about Knative today, so we'll go into a lot of that. It was a big deal at GCP Next.
They also announced a whole bunch of other stuff at Next, but again, I wasn't there, so I wasn't able to actually keep up with all the announcements. And then the final announcement is the CFP. This is the call for papers, or proposals: if you want to talk at KubeCon here in Seattle in December, you've got to get a proposal in by August.
August 12th is when the CFP closes, and if you have experience in any shape with Kubernetes, I would really encourage you to submit a proposal. If you have something interesting to talk about, you don't need to be an expert here. Your experiences, what worked and what didn't work, being able to share your learnings wherever you're at in the Kubernetes journey, is, I think, interesting for folks. And so I think one of the ones at KubeCon EU
that I think a lot of folks got a lot out of is Kate's. Kate, one of our engineers here, who primarily does a lot of front-end stuff, was brand new to Kubernetes. She shared her experience coming from this front-end mindset into the Kubernetes world: how it was intimidating, the resources that she used, and how she started coming up to speed on all this stuff.
So even if you don't have that deep expertise, we really want all sorts of different points of view when it comes to KubeCon. All right, let me catch up on the comments here. So Nadir in London, good to see you. Dan from Sysdig; I don't know where Sysdig is, is that a city? No, I think it's probably San Francisco, I'm assuming. Sean, you submitted your KubeCon CFP, awesome. And then Pablo asks what I think of Knative plus Kong, for example. We're gonna get into that a little bit.
Okay, Adobe in New York. So, fun fact: the Google office here in Seattle is in Fremont, which is not a town in California; it's a neighborhood in Seattle. It's actually right next to the Adobe offices here, and they have this beautiful courtyard that I was always waiting for Adobe to move out of, so that we could move in and get that courtyard. But I don't think that's actually happened yet. Saptak from Seattle, working for AWS, awesome. And then Jonathan from SAP in Vancouver.
Yes, Fremont is the center of the universe, exactly. All right, so those are the announcements of what we're going through. Oh, and then George pulled up a video of Kate's talk, if you want to get that idea of, you know, wherever you are in your journey; that's in the notes here. Alright! So yes, and thanks to all the Knative folks. How do we pronounce Knative? We need Chris here to help us come up with the pronunciation: is the K silent, as in "native," or do we have a "kay-nay-tive"?
So let's go ahead and get started. We're going to start with the homepage for Knative, and I think one of the things I actually had to go through, as I was wrapping my head around what's really happening here, is that a lot of the hype around this, a lot of the way that people have been talking about it, the marketing, has really diverged from the underlying implementation. I want to really dig into that difference and understand it.
And so the first thing here is that it's Knative, and it's built on Kubernetes and Istio, and it's part of Google Cloud; I'm looking at sort of the title here. And Antoine from Paris, good to see you. So I think this is interesting. You look at this (I saw this come up) and I'm like: is the headline that this is built on Kubernetes and Istio? Or is the headline that it's a platform to build, deploy, and manage modern serverless workloads?
What does serverless mean? There's this huge argument about what serverless means, because one of the other things that we probably should have mentioned was introduced at GCP Next (and I haven't had a chance to look at this at all) is serverless containers. Serverless containers are essentially containers for Kubernetes, or that you can access through Kubernetes, without having to manage the node. So in some ways that's like AWS Fargate or Azure Container Instances.
Well, those are serverless also; those are called serverless. So, like, do we have serverless running on serverless? Serverless is kind of losing meaning over time. So what we'll do is try and decode what's actually here instead of buzzwords like serverless. There's a bunch of primitives; it's aimed at developers; there are a lot of development platforms here; flexibility and control. I think this is interesting.
One thing is that, because Knative is built as a layer on top of Kubernetes, you can actually deploy it in a whole bunch of different places. And I think, as we look at it, the value of a platform is proportional to the ubiquity of that platform, and so something being open like this means that over time it's going to be available in a lot of places, and then you'll get that network effect around value.
That's an interesting thing to think about. We'll talk about some of that, and some of the things that I think are core scenarios that aren't enabled yet from what I've seen, especially in terms of operations, as you start looking at multi-tenancy and multi-team and these types of things. And then "serverless workloads": I think in this case what they mean by serverless is like Lambda, function-as-a-service. This idea of code-to-URL, with very few machinations in between.
So then the Knative features: they are serving, build, and eventing. I like this breakdown and the componentization that's coming here: you've taken sort of what makes up a PaaS or a function-as-a-service, and they've exploded it into these component parts. I'm really hoping, as we dig into these, that these things can be used independently and it's not,
you know, some sort of hairball, and I think there are hints that that's the case. So we'll see how much we can get through in the hour and a half that we're going to spend with TGIK, but I'm going to start with the serving stuff, then look at build, and then maybe, if we have time, we can get to some of the eventing stuff.
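To make the serving piece concrete before we dig in: from the developer side, a Knative Serving app at the time of this episode was described by a single Service object, which Knative expands into Configuration, Revision, and Route objects. This is a rough sketch using the v1alpha1 API generation current then; the app name and env var are illustrative, not from the episode:

```yaml
# Minimal Knative Service (serving.knative.dev/v1alpha1 era).
# Knative turns this one object into a Configuration, Revisions,
# and a Route that maps a URL to the latest Revision.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld          # hypothetical app name
  namespace: default
spec:
  runLatest:                # always route traffic to the newest Revision
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go  # sample image
            env:
              - name: TARGET
                value: "TGIK"
```

Applying this with kubectl and waiting for the route's domain to become ready is the "code to URL" flow he mentions above.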
My understanding from interacting with some folks on Twitter is that there have been the most road miles on serving, then build, then eventing; eventing is the newest out of these things. And then there's another feature, a serverless add-on for GKE. Is that a feature of Knative, or is that a GKE thing? I think this is one of those places where the blending of what Knative is versus what is a Google product, what is open and what's not open,
is confusing, because this is really a Google Cloud marketing page. Then we've got a blurb, and then there are instructions for installing, docs, developer resources, build, build templates, and eventing. We'll try and dig into that stuff, and then there's community here. So I'm going to go through some of this stuff and why it matters. "Extensible at the top and pluggable at the bottom": okay, that's what Mark is saying from one of the GCP Next sessions. Yeah.
Hopefully we can actually see if we can put that into action. Okay. So the first thing here (I'm going to reorder my tabs) is that there's a mailing list that you can join called knative-users. There are absolutely zero messages on that thing. I did join it, but it's still important; you still want to join it, and the reason you want to join it (and this is a little bit of inside baseball) is that obviously Google and Googlers are there.
There's also knative-dev. I was going to call attention to that, but you have to apply to actually be a member of knative-dev, and I'm like, you know, I don't know; there was no documentation around who could apply and what the approval process was, so I was just assuming that it was for the folks in the in-crowd. So maybe get some documentation around that, or maybe that's in the contributing guidelines. This is knative-dev here.
No, it's not in the contributor guidelines, so I'm not quite sure what the protocol is around knative-dev. In any case, knative-users has zero messages, but this stuff is important, because the way Google works is that they use G Suite like everybody else, with Google Drive and all that, but they have it locked down more than a lot of other users. So Googlers cannot share a Google Doc with the world; they cannot make it public.
They can share it with external people, but they can't make it public like you can with a consumer Google Doc, or, if you actually own your own G Suite as an administrator, where you can set different levels there. This creates a real problem in the Kubernetes world, and I assume it creates a problem in the Knative world, where Googlers will want to create a doc and share it with the community.
By joining the group, you'll actually then be put on the ACL so that you can access the design docs and that stuff. Okay, so Keith says knative-dev isn't for the in-crowd; they let him in. Okay. So then there's knative-dev here that you can apply to join. I'm gonna do this: I'm gonna link it to my heptio.com profile, say "please let me in," and apply to join the group. All right, so there we go. So there's knative-dev also, which I assume, if I get in, actually has some discussion topics. Okay.
Let's see. Okay, and then knative-dev gets you invites to the community meetings, until it gets too big and then Google Calendar breaks down. Yeah: quick, everybody look busy. And so, my understanding is that the goal is that eventually you'll be able to go to slack.knative.dev. Let me link that; does that work? But last I checked, that wasn't hooked up yet. Oh, no way, where did that go? What's going on here? I have to do HTTPS. All right.
Well, word is that George is working on it. Okay, so that's not up yet; they'll get that set up. Slack makes you sort of host this invite thing yourself, and oftentimes people host it on Heroku, so maybe they'll host it on Knative. But for now there's a link.
There's a link in the notes here that you can use (this was shared on Twitter) that'll get you access to the Slack channel, so they're getting there. Alright, so that's what we've got here, and then we'll go ahead: there's the Knative GitHub organization. Another thing, and this is again the difference between the marketing world and the open source world, is that there's a logo for Knative: there's a K and an N, and I'll deconstruct this for you in a second.
But what you'll find is that that logo doesn't show up anywhere here on the Knative landing page. What this tells me is that the branding people at Google hate the logo, and that's good, I think, in some ways, because you want the open-source community to have its own identity. You want to create some white space, I think, between the corporate overlords and the open-source community. Alright, so Ville, who I worked with at Google before I left.
Before we went public (and this was Brendan's thing) we called it Seven of Nine. "Nonagon"? Nonagon, exactly; K-nonagon, that's awesome. So originally Kubernetes was called Seven of Nine, because we wanted to create a friendlier Borg, and then that got shortened to Project Seven.
We couldn't go public with that, and so then Kubernetes was picked out of desperate exasperation. But when we did the logo, as a way to harken back to that, it ended up being a seven-sided thing. One of the things that we found out of this is that the seven-sided thing is actually common in nautical contexts, because of the seven seas.
So it's not uncommon to see a seven-spoke steering wheel on a ship, and the Maersk logo actually is a seven-pointed star. So that's fun. So that's the Knative logo there. Alright, look at this: "Kubernetes-based platform to build, deploy, and manage modern serverless workloads." Oh, it matches up, look at that; there's some consistency here, check it out. I wonder when that changed.
So let's dig into the docs. I like that there's a docs repo from the start, and the docs are actually looking pretty good. So that is something really interesting. Oh, and Joe is saying that They Might Be Giants did a song about polygons, from a triangle up to a nonagon. So that's fun.
Okay, so there's a lot going on here, and there's a diagram, but the idea is that there's Knative, you can integrate it with other stuff, there's an API for developers, and it uses Kubernetes. This is like a grade-A marketecture diagram here, but there's Istio in the mix and stuff, so that's cool. We're gonna go ahead and try to get this thing installed and then start playing with it. When I first started looking at this, right after it was announced, there were just these provider-specific instructions, and I'm like, but I don't use any of these.
You know, we do a lot of stuff with AWS, because that's where so many of our customers are, and there they're on EKS. It turns out that there's a problem with Istio 1.0 and the sidecar injection on EKS; I think folks are talking about that and working through it. But we'll instead install into AWS with kubeadm, using our quickstart. That's generally what I use when I'm playing around with stuff. And so I'm like, it's really sad that there are no generic instructions.
So that's what we're going to be going through, since I'm not using any of those up there, and this is going to walk us through the general stuff. Now, the problem, John, with EKS (and if folks have a pointer to the bug number, they can dig into this) is that the EKS API server does not have the mutating admission controllers enabled; that's a beta feature.
You have to go ahead and enable that, and the EKS folks haven't enabled it, which means that the automatic injection doesn't happen. The instructions here say that you have to set Istio up for automatic injection; I don't know if there's a way to run it with manual injection, and it doesn't look like it. We'll go into a little bit about what the injection looks like and dig into that a bit.
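For context on what "automatic injection" means here: with the mutating webhook enabled on the API server, Istio's sidecar injector patches new pods in any namespace that opts in via a label. A sketch of that convention (the namespace name is hypothetical; the label is stock Istio):

```yaml
# Labeling a namespace opts its pods into Istio's automatic sidecar
# injection. The istio-sidecar-injector's MutatingWebhookConfiguration
# matches on this label and mutates each new Pod to add the Envoy
# proxy container. If MutatingAdmissionWebhook isn't enabled on the
# API server (the EKS problem discussed above), this silently does nothing.
apiVersion: v1
kind: Namespace
metadata:
  name: demo                 # hypothetical namespace
  labels:
    istio-injection: enabled
```

That "silently does nothing" failure mode is why the missing admission controller on EKS is such a sharp edge.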
Okay, there's a little bit of chatter here about this. When you use managed systems like GKE or AKS or EKS, there will be a set of configurables that you won't have access to, and over time in the Kubernetes world we're working, where it makes sense, to pull those things out so that they can be configured within the cluster. But I think there's going to be some stuff, like enabling some of these advanced features, that may be locked down and maybe won't ever be changeable on the fly.
And so you can't go to something like GKE and say: hey, I want to use my own LDAP to actually do the authentication to the Kubernetes cluster there, because GKE doesn't give you access to those tunables. It's that locked-down access to tunables that makes this difficult with EKS. The rule for these types of features, at least with GKE, I believe, is that anything that's beta gets turned on.
You can get the alpha features turned on, but then it's a sort of experimental cluster that won't be upgradeable, and I think it self-destructs after a while. So if you want access to an alpha cluster feature, you can get it with GKE, but there are some safeguards to make sure that you don't take that stuff into production. All right, a little bit of an aside there. Okay, so pretty much what we do is: we install Istio, we install Knative serving, and then we deploy an app. All right.
This is essentially curl piped to sudo bash going on, so I at least like to look under the covers to see what's happening here. Usually what I like to do is dig into all these things and look at all the objects that are coming into play, and then also look at the RBAC rules that are being configured for these things. Because there's a question of, like, hey: is this thing essentially acting as root within
my infrastructure, or is this something that can be safely installed into a namespace independently? What you'll find here is that Istio definitely needs to be root, and I believe that the instructions for Knative essentially require root also. There is a lot of YAML here, and so I don't think I'll be able to make it through all of it. Istio has gotten bigger since last I looked; it also made it to 1.0.
So one of the first things to notice here is that we're installing Istio, but we're not installing stock Istio. There's a version of Istio that has been patched to work with Knative, but those patches (I was looking at this) are very, very narrow, so hopefully that won't be a big deal for folks who have Istio running in other ways.
Oh, there's a question in the chat: can you deploy it on on-prem Kubernetes? I don't see any reason why not. There is a question, I think, in terms of whether you need any persistent volumes, which can often be a sticking point when going on-prem, but it's something that we can look into when we look at the Knative install.
I think when you do that helm template stuff, it doesn't include the namespace there, because it assumes that'll be there and that's done out-of-band of this. And then there's another patch about running the Istio proxy, and this is a template inside a template. So this is the thing that makes this stuff so confusing: this symbol here (the pipe) essentially means take everything at this indent level and treat it as a string.
So it's essentially YAML embedded as a string inside other YAML, and you don't get validation as you do that. And then we have more YAML embedded as a string inside of YAML: we have YAML that's stringified inside of YAML that's stringified inside of YAML. And then Istio has its own templating engine that it uses for its templates, but it uses square brackets, because otherwise it would conflict with the curly braces that Helm uses for its stuff.
They had to add a preStop hook here to make sure that it could gracefully exit. I looked up this bug: otherwise what happens is some requests were getting dropped, so fast scaling with Istio ends up being a corner case that needs to be solved there. And Lu Maddie says it looks like Tiller is gonna sunset in Helm 3, at least per the current proposal. Yeah.
So this is forty-one hundred lines of YAML. I can't go through all of this, but essentially there's a namespace and there's a bunch of config maps. The config maps end up being configurations for the admission things and so on, so it ends up being dynamic config for these other things. It gets very meta, right? And, I've got to be honest, this (and I think the same will be true when we start looking at the Knative install)
starts to feel like the pile of bash and Salt that I built for the first Kubernetes install. There was this assumption early on that, hey, if we can do a one-line install, then everybody's going to be happy and it's going to be great. That definitely didn't work for Kubernetes, because we found that the environments that we were installing into were different enough that we really needed to be able to customize and manage that stuff over time. So that whole kube-up stuff
that I was involved with: there were a lot of lessons to be learned there, let's just put it that way. I worry when I see this much YAML with this much meta-configuration, where we have YAML inside of YAML that actually has sub-objects and stuff like that. It's pretty interesting, but anyways. This is a ValidatingWebhookConfiguration that gets installed for doing validating webhooks for Istio.
This is a config map that has nothing in it, called mapping conf. We have another one, which is istio-mixer-custom-resources, and so these are CRDs that get defined, I guess. Oh no, these are CRs: so once the CRDs are defined, these are the objects that get created, I guess. Essentially, instead of sequencing these things, there's something else that goes ahead and creates these CRs. Some of this is an "attribute manifest"; I don't know what that is.
So we have a list of YAML embedded as a data item in a config map. Yeah. Matt's like, okay, so could ksonnet help here, perhaps? I don't know.
What I would love to see, just taking a step back for something like this: one of the things that Google had internally was a system called MPM, where you could take a bunch of data files and version them, similar to the way that you version a package. There's a public talk on this.
So I can say the words MPM: you could version that stuff and distribute it, similar to the way that we deal with container images today. I would love to see, when you get config maps that are this big, that we really view that as sort of a versioned, signed tarball that gets referenced. Because I think at some point you just have too much and you can't see the structure of what's going on here, and this has to be machine-generated to pull all this stuff together.
It just ends up being assembly code; it's like trying to decode WebAssembly. So yeah, an episode on Istio 1.0 would definitely be good, thank you for that. And then there's this thing called stdio; here's an access log one that gets installed; here's a TCP access log. So there's a whole bunch of stuff, and then there's request count, request duration...
These are metrics. Oh man, so there's a whole bunch of config for Istio that actually gets bundled up into config maps. There's Prometheus here, and this is a Prometheus that's part of the Istio namespace, so I'm like: is this different from a Prometheus operator? Is a Prometheus operator built into Istio? Is Istio the systemd of Kubernetes?
So, okay, there's a whole bunch here that's configuring Istio. I'm not gonna go through all of this; I'm just gonna YOLO it and hopefully it doesn't break. Well, I do want to see: let's look at the kinds that we're dealing with here, so I'm going to do grep kind istio.yaml, and we're gonna get through a bunch of these. Okay: there are CustomResourceDefinitions that get installed, and then we get down to ClusterRoles, and we have one for the mixer.
And then there's a binding. Okay, here's a Job; it's a post-install hook to do some stuff. Alright, so one of the interesting things is that when you do use Tiller, it can actually do workflow-like things, and I'm not sure how that gets flattened out when you do the helm template stuff. Okay, and then there are service accounts; there's a whole bunch here. And then we have bash in here. Sweet.
It's not DevOps without bash. Alright, so you can go into the Istio chart and actually see where all these things are being pulled from, all the different files; we're not going to go through that, but we do get down there: services, deployments, services. So let's grep for kind: Service. The one thing I do want to check is just to make sure that we're not going to get caught out by
So we have, so this is a Service without a type, so it's ClusterIP. Here's another ClusterIP Service; this is for the egress gateway. Then the ingress gateway: this is type equals LoadBalancer, and it doesn't have any annotations on it.
So it's not doing anything funky, because when you're using GCP or GCE type-equals-LoadBalancer Services, you can do some stuff to essentially preserve the incoming IP address, or use the proxy protocol, or anything like that. I just wanted to make sure that we weren't seeing any of that going on here, and we're not doing anything special. It's just a regular old type-equals-LoadBalancer.
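For context, here is the sort of annotated Service being scanned for. This is a sketch: the annotation key is the real AWS proxy-protocol toggle and externalTrafficPolicy is the standard client-IP-preservation knob, but the Service itself is illustrative, not copied from istio.yaml:

```shell
# A LoadBalancer Service with the kind of tweaks being checked for above:
# proxy protocol on AWS ELBs, plus keeping the incoming client IP.
cat > lb-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-ingressgateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  ports:
  - port: 80
EOF
# kubectl apply -f lb-svc.yaml
```

The Istio manifest has none of this, which is the point: nothing cloud-specific to trip over.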
If you are running this on-prem and you don't have a load balancer integration installed... oh great, you found that talk on the NPM stuff. That was from, like, forever ago, but it's a fascinating talk on NPM; I also referenced it in a post on my blog a while ago, so George will get that added to the show notes. And then, yeah: so you can change this to a NodePort, and then, however you get to the node port, things will continue to work.
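A sketch of that on-prem fallback. The patch is generic; the service name and namespace are the stock Istio ones and are assumptions here:

```shell
# Switch the ingress gateway Service to NodePort when there is no cloud load balancer.
cat > patch.json <<'EOF'
{"spec": {"type": "NodePort"}}
EOF
# kubectl -n istio-system patch svc istio-ingressgateway -p "$(cat patch.json)"
python3 -c "import json; print(json.load(open('patch.json'))['spec']['type'])"
```

After that, you reach the gateway on any node's IP at the allocated node port, fronted by whatever gets you to your nodes.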
This is actually invalid YAML, because it has a duplicate of the app label here, so I'm just going to go ahead and delete that, just so it doesn't make stuff look bad. From chat: "Galley is the new Istio..." okay, yeah.
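About that duplicate key: YAML forbids it, but many parsers silently keep the last occurrence, which appears to be what the Kubernetes parser does here. Python's stdlib JSON parser, used below as a dependency-free stand-in, shows the same last-key-wins behavior:

```shell
# Most parsers keep the last occurrence of a duplicated key rather than erroring.
python3 -c 'import json; print(json.loads("""{"app": "first", "app": "second"}""")["app"])'
# prints: second
```

So which label "wins" depends entirely on parser behavior, which is exactly why duplicates in a shipped manifest are trouble.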
So, the egress gateway comes with a bunch of stuff, and then we have the ingress gateway again. The ingress gateway is essentially like an Ingress: it's a bridge from the outside world into other things, and it has a bunch of stuff going on here. And this is fascinating: the Istio deployments don't have full requests on their containers. For CPU we have the requests, but there are no RAM requests there, which means that there's a chance that this stuff goes down.
That is, if you actually stretch your cluster too much. Now, there's a good chance that when you're running... well, no, it's creating its own namespace, so you're not going to get any sort of default quota. With the default namespace on GKE, they do add a default quota so that you actually get this stuff out of the box, but I don't believe that will happen with these other namespaces.
And so, if you are going to run this stuff in production, you probably want to make sure that you put limits on all of that, and that you set up the right pod security policies, and a quota default (I forget what that object is called) on all of your namespaces, including these. Okay. So then here's the mixer, and we've got a bunch of affinity stuff and all that, and this again also just has CPU; it doesn't talk about RAM. So that's really interesting.
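The object being reached for is, I believe, a LimitRange, which supplies default requests and limits to containers in a namespace that don't set their own. A sketch, with arbitrary values:

```shell
# Give every container in a namespace a default memory request/limit
# so request-less pods (like these Istio ones) can't silently run unbounded.
cat > limits.yaml <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: defaults
  namespace: istio-system
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container sets no request
      memory: 128Mi
      cpu: 100m
    default:               # applied when a container sets no limit
      memory: 512Mi
EOF
# kubectl apply -f limits.yaml
```

Paired with a ResourceQuota per namespace, this is the usual production belt-and-suspenders.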
Ten milliseconds... and then there's telemetry. Wow, there's a lot going on here. Okay, so: YOLO. kubectl version: I have something running on AWS using our quickstart, I'm running 1.9.3 here, and it's pretty plain vanilla Kubernetes. So let's keep our fingers crossed. kubectl apply -f istio.yaml... look at that go. Okay, so nothing, nothing barfed, so that's good.
And because I knew that we were installing a lot of stuff, I have, like, five nodes here, so we have five nodes running. From chat: "So you're sure you deleted the right duplicate label there, Joe? It could be pretty wonky with that deployment otherwise, you know." My guess, and I'll go through it and I'm happy to update it, is that that was a label on the top-level Deployment, and this is something that Helm tends to add.
Because the Deployment is kind of the top-level object, there's generally nothing that is using that label to key off of; I think maybe it might be for monitoring and stuff, if you're pointing Prometheus at it. But we can go through and, while we wait for everything to come up, we'll look for it. That was istio-egressgateway. Now, that does actually get put on... that is the app label that gets... okay, so it's that one. I wonder...
Oh man, and there's more stuff broken. Well, let's see. Here's what I'll do: I will download it again, so we have an unpatched version, even though it's bad YAML, and we'll apply it again. That should fix all that stuff, and then we'll do kubectl get pods --all-namespaces. All right, so we've got a lot of stuff coming up here now.
Okay, yeah, I don't know which one wins; it's invalid YAML when you actually have the duplicate key, but apparently the Kubernetes YAML parser doesn't seem to care. Okay, so it looks like all the stuff is up. We'll do kubectl -n istio-system get services -o wide, and what we'll see is that the only Service that's a LoadBalancer is this one. Now, on GCP...
When you allocate a load balancer on GCP, because Google networking is pretty awesome, you get one IP address that actually works for it. What happens with ELBs, and by default you get the classic ELBs with the AWS cloud provider, is different. And, I haven't played around with this yet, but there is support coming down the pike for NLBs, which are essentially a different type of load balancer, and with those you'll go ahead and get an IP address, similar to Google, I believe. But anyways...
This ends up being a CNAME, and you want to make sure that you refer to this hostname when you actually do things, versus referring to the IP addresses underneath it, because those will change over time as Amazon essentially rotates the VMs that are implementing this load balancer behind the scenes. Okay, and then somebody says: kubectl get crds. Whoa, Istio installs a lot of CRDs. They are totally there for CRDs. We've got opas, we've got noops, I don't even know what those are, noop...
There's a CRD for deniers, circonuses (I have no idea), checknothings; no idea what all this stuff is. Wow, there's a lot there. Okay, so that's a different episode! Okay, now let's do the same thing with Knative, and I'll try to move faster here, because I think my point here is made, but we're going to look at Knative.
So, and it's seventeen-odd thousand lines of YAML. Now, I started looking at this and I haven't had a chance to totally deconstruct it, and a lot of it is things like: we have a dashboard here, and so this ends up being a big JSON file that gets encoded, I believe, into a ConfigMap, and so that ends up being a lot of lines. But I just don't like this pattern. I mean, once you get past, like, twelve lines in a ConfigMap, we really need a better solution.
I think we really need something like NPM for Kubernetes to be able to manage this, because, look at this, I'm still in that same ConfigMap. Now, there's, like, a one-megabyte limit for a ConfigMap, so they haven't actually hit that yet. And so this is a ConfigMap, and the metadata here tells me that this was not generated with Helm.
It might be a kubectl bug. Okay, yeah, it has to be at the end; I wouldn't expect it to have to be at the end. So you are working on splitting it up, okay. So, looking at chat here: Matt's saying, like, 90% of it is telemetry and stuff, and you can do this thing where there's a no-monitoring variant that actually will be a version that's a lot smaller; they're working on splitting it. Okay, cool. So what do we have here? So we have a monitoring namespace, but this isn't marked, like, knative-monitoring.
This is just "monitoring", and so: is this installed for the entire cluster? Is this installed just for the stuff that Knative is doing? I'm a little bit confused about what this monitoring is for here, because when we looked at the three components, monitoring wasn't one of them.
There are fluentds, and so this thing, well, it's doing, like, hostPaths and stuff to be able to do this. Okay, so, this isn't cool. I mean, just to be clear, because, like, this is some serious cluster-level stuff that you're mucking with here. Wait, is this... this may be, this may not be... is this...
Is this the Knative fluentd DaemonSet? Okay, yeah, it is. Yeah, I understand what it's doing, that it's exporting container logs for everything in the cluster, and that's great. But I think, you know, we've got to recognize, and I think this is something that folks who are really running Kubernetes will tell you:
A
They
already
have
solutions
for
some
of
this
stuff
and
figuring
out
how
to
integrate
with
that
versus
just
whacking
that
stuff
in
there
is
something
that
I
think
should
really
be
well
documented
and
probably
broken
out,
because
you
know
like
there
are
like
a
lot
of
different
ways
that
people
think
about
collecting
logs
and
yeah,
and
it
was
unexpected
that
it
was
gonna
stall.
A
daemon
set
that
I
was
gonna,
start
mucking
around
with
host
paths.
So
that's
that's
interesting,
okay,
but
it
looks
like
it
actually
got
on
there.
A
A
A
A
Yeah, so it's launching, but it's not there. From chat: "Did you miss the Istio label command after installing Istio?" The Istio label command: we're going to have to do that on the default namespace; we haven't gotten there yet, and I did know about that. So the sidecar-injector pods aren't running. That's actually the thing we're having trouble with, so I'll teach you all how to add this, then.
Okay, so kops isn't working either, but the kops one, that's probably 1.9, because kops at least... Let's look at this issue here. "kubectl label namespace default", so yeah: doing the kubectl label namespace default, we're going to have to do that before we actually start doing stuff in default, but that only affects stuff in the default namespace, so that's not the issue that we're running into here. So...
Okay, so: NamespaceLifecycle, LimitRanger, ServiceAccount, PersistentVolumeLabel, DefaultStorageClass, DefaultTolerationSeconds, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota, NodeRestriction, Priority. Yeah, so this isn't only an EKS issue. From chat: "Maybe because the duplicated entry was deleted in istio.yaml when it was first applied?" I don't think that was it, because... I mean, we can go ahead and we can blow it away.
A
Do
control
delete
Sto
well,
we'll
do
we'll
just
we'll
blow
it
all
the
way
here.
A
A
A
And so that ends up being the issue, and so it may be that there's a configuration thing in terms of this. Now, what we're doing is: I launched this using CoreDNS, which is the newer DNS server for Kubernetes that kubeadm has switched over to. I wouldn't be surprised... and I don't think GKE is using CoreDNS yet, is it?
So I don't think GKE is using CoreDNS; chat confirms it's not. Okay, so it could be that CoreDNS behaves slightly differently, and that could be the problem that we're actually hitting. And so with this particular issue that we had here, you know, somebody's saying "hey, that should be a valid name," and other people are saying no, it's not.
Could it be a cert thing? Because, I mean, it could be a cert thing or it could be a timeout thing. This whole business of having to do certs for mutating webhooks and stuff like that is a total pain in the butt. Yeah, a total pain in the butt, and that's why we need something that actually sits below this level, and that's why I think SPIFFE could actually play that role, but we're not there yet. Okay, so I'm going to use kuard, which is my little demo thing.
Unless I actually set the disable-Istio annotation there, because even if you're not doing the injection, it has to actually go through the webhook; for the webhook to do nothing, it has to at least check the annotation. Yeah, I think so; part of this, Paul, I think, yeah: we can definitely have one that has Knative deployed and stuff like that, but I think most users are going to be coming at this going, "hey, I want to understand what's happening under the covers here."
A
What
is
the
user
experience
on
this
stuff
and
I
think
the
user
experience
can
be
great,
but
we
also
have
to
pay
attention
to
making
this
stuff
work
and
actually
being
able
to
maintain
it
over
time
and
I.
Think
that's
the
place
when
you
have
like
you
know
thousands
of
lines
of
yeah
mol.
It's
it's
there
is
this
question
of.
How
does
this
evolve
over
time?
Do
administrators
actually
have
the
insight
to
actually
see
what's
going
on
there?
A
A
A
A
A
Maybe it was, like, trying to add the admission-controller webhook before the service was ready, and then it got into some sort of backoff loop or something, so I'm not sure what happened there. Okay, so we've got enough going that we can actually make this stuff work. So I'm going to go back here, and we'll close that, and close that, and now it says: okay, let's go ahead and get started with a Knative app deployment.
Nobody's told me yet... so these are instructions that are missing here. I didn't do the label thing, so we've got to do that. Yeah, okay! We can't forget to do that. Okay, so what I just did here is I labeled the default namespace, which says that any pods that you start here will get the Istio sidecar installed. From chat: "Was the YAML applied more than once?" It was, but applying YAML should be idempotent, and so, yeah.
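That labeling step, expressed as a manifest so it is easy to inspect (the istio-injection label is the standard sidecar-injection toggle):

```shell
# Equivalent to: kubectl label namespace default istio-injection=enabled
cat > ns-default.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
EOF
# kubectl apply -f ns-default.yaml
grep -q 'istio-injection: enabled' ns-default.yaml && echo ok
```

With that label in place, the mutating webhook injects the sidecar into every new pod in the namespace.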
So, so that shouldn't have been an issue, but maybe it was; I don't know. So now, if I do the kuard kubectl port-forward kuard 8080, will that work? Now, if I go to localhost:8080... look at that. I can actually look around, and I have stuff running, and I can do DNS, and it all works now. But now let's do kubectl get pod on it.
What we'll see here is that this is a much more complicated pod definition than just my stuff, because, like, my stuff here is kuard; I do have, well, so this is just image, blah blah blah. Now, here's another entry, and the ordering is weird, but this is the sidecar that gets injected, and it has a bunch of command-line arguments.
It has a bunch of environment variables, and then we also have an init container that does a whole bunch of stuff to get the proxy up and running. And so when Istio does its injection, it's actually fairly invasive in terms of the way that it goes ahead and puts that sidecar in there. So when people say, "oh, it just runs the proxy for you as a sidecar," there's some deep voodoo going on there as you actually go through this stuff, and so it's just something to be aware of.
If you're running with Istio and you start looking at your pod definitions and stuff, you're going to be like, "whoa, somebody mucked with this." That was Istio that was mucking with it. Okay, so we've got that up and running now. Okay, so now what we're going to do is, let's install an app deployment. So we'll get the deployment stuff here, and we have a new object here called Service.
service.yaml, and what we're going to be doing here is... yeah, yeah, there are capabilities and stuff to be able to do this. Oh yeah, so yes, Istio is actually pretty invasive, which is why, when I'm like, "hey, do we really need Istio?", it's like: well, that just adds more complexity to manage. It adds more complexity. Now, you get a lot with it too; it's not like Istio doesn't bring value to the table. I think it's just important to realize that, you know, it is actually going to create...
You know, a whole bunch of extra stuff running in your cluster that you have to be aware of. Okay, so: metadata, helloworld-go. So this thing is a thing called a Service. Now, what's confusing here is that in Kubernetes a Service is essentially a routing mechanism, and if you read the book that I wrote, Kubernetes: Up and Running, I describe a Service, in its most basic form, as a named label query, and then you can use that named label query as sort of a service-discovery thing, and then layer proxying on top of it. And then there's the idea of a service in Istio, which is, like, a super-duper fancy version of a service layered onto a Kubernetes Service. What's happening here, and I believe this, and you all can correct me if I'm wrong, is that a Knative Service is a Deployment, plus the routing rules, plus perhaps even a build configuration. So it's wrapping up all of that stuff into a single object.
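Here's roughly what that single object looked like in the release being demoed: the v1alpha1 serving API with runLatest. The image is the stock Knative sample; the env value is an illustrative assumption, not necessarily what was typed on stream:

```shell
# A Knative Service: deployment + routing (+ optionally build) in one object.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  runLatest:                 # alternative: pinned (only one of the two may be set)
    configuration:
      revisionTemplate:      # analogous to a Deployment's pod template
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
            env:
            - name: TARGET
              value: "TGIK"
EOF
# kubectl apply -f service.yaml
```

One apply produces the Configuration, the Revisions, the Kubernetes Deployment, and the Istio routing underneath.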
Yeah, I haven't applied that YAML yet. And so I think what we're doing here, and I don't know, I mean, this is nested pretty deep, but we have runLatest, then configuration, and we have a revisionTemplate, and then a spec, and so on. And so then this is another thing, and I think we do a shitty job of this across Kubernetes: I haven't read this yet, but here's a description of the different things.
So it's all these things brought together, and what I'd love to see is this stuff broken down a little bit more. Because it's like, well, what if I want to use a different thing to build with, or what if I want to use a different thing to do routing, right? Is there a way that we can actually start breaking this up into finer-grained pieces of componentry? And then the other thing, and we do a crappy, crappy job of this... okay, this isn't bad, but across Kubernetes...
You know, you get to this page here and you say, "well, let me look at the 1.10 reference," and it loads up this huge page. Now, Phil did this page and it's great, and so I can look at, like, here's Service here, and it does talk about a bunch of the things that go into Service. But what we end up with is all these sub-objects, because Kubernetes configuration gets pretty nested, and then you have to click around this stuff all over the place. Oh, and that link just jumps somewhere unhelpful.
Yeah, and so that's not... and then it just does this, and the scroll-back doesn't work well. And so here we have Service, and I can go to ServiceSpec, which jumps me here, and then I have a bunch more, so it's, like, deeply nested. It's really hard with this documentation to get a feel for what is actually going on.
I think this one is better, because at least it's something that you can read and it has examples, but we're not good at having canonical documentation around the YAML that we're actually writing. Okay, so here's Service, where we have regular old Kubernetes metadata, and we have a spec: runLatest versus pinned. Can you have both of these? Okay: "only one of runLatest or pinned may be set in practice." So that's really confusing, because this is really, like, "hey, I want to do something, but I only want to run the latest, or I want to run a specific revision." But instead of actually having, like, a type on a run configuration, where you say, "well, I want to run latest" or "I want to run a revision," you actually have to do it by changing this part of the structure. But I guess these things are the same sub-object there.
So that's really confusing to me. Okay, and then we have a buildName, and then we have a container. What is this container for? Is this the container that I want to run? Yeah, and so then here I can specify my containers and stuff. Or, there's something called a concurrencyModel; I wonder what that is, and...
And how can I... yeah, wow. Okay, and then there's status, which we definitely get here, so I'm kind of confused. Okay: revisionTemplate spec is like a pod template spec, okay, I get that. But what is the model for revisions? Can I actually look at my historic revisions? Is it like Deployment, where I can see, you know... does it manage a set of them? I guess what we're saying here is that, if we look at the resource-type documentation, a Configuration records the history of Revisions. Does it track that using CRDs?
All right, and I have to qualify it with serving; does that work? Okay. So if I just do kubectl get service, because it's called Service, that gives you Kubernetes Services; it doesn't actually give you the Knative Services. And so that's one of the implications of reusing the Service name here, and then...
Yeah, and then, if we just look at the raw YAML here, what we'll see is that we have conditions, which is interesting; this is similar, with, like, transition times, so it looks like this probably borrows code from the Deployment. We have a domain that this thing is actually hosted on. We have an internal domain.
The domain... I want to make that, say, tgik.io instead of example.com. There's observedGeneration, and then traffic: there's traffic routing that actually says, hey, we have a hundred percent going to the helloworld-go configuration name, and that's a revision name. Okay. Can I actually send this to a different configuration, or is that part of the Service? I didn't see a name in the configuration, so I'm not sure how this model works here.
Fun
bugs
with
the
CR
D
short
names
yeah,
so
okay
and
then
there's
a
revision
name,
but
how
do
I
actually
okay
and
then
there's
a
lattice
latest
revisions
and
stuff
like
that,
so
I
can
I
can
I
do
like
revision
I
like
revision,
I,
just
guessed
that
there
we
go
so
the
revision
okay.
So
the
relationship
between
K
native
service
to
revision
is
similar
from
deployment
to
replica
set.
So
we'll
go
ahead.
A
A
Okay, so, I mean, maybe you can answer this for me, Matt: why are you all even using Deployments, right? Because Deployments help you to do upgrades, and that's the value that they bring, but you all are doing your own upgrading by actually dealing with Revisions. So a Revision, I'm assuming, is immutable, and you rotate those things around, so I'm confused why Revisions would be managing Deployments, versus actually just managing ReplicaSets directly, and then, I guess, you'd have, like, maybe extra data on... So I think, you know, in some ways the Revision becomes a strongly typed thing paired with a ReplicaSet. Because it's not clear to me what Deployment is bringing to the party. All right, so we have this thing running, which is sweet, and so let me, like, let's map this stuff out here.
Yeah, I mean, so, okay. So the question is, like... okay, interesting nuance from chat: user-mutable versus operator-immutable. Yeah, like, people can always go in and muck with stuff. I mean, you can, like, you know, dd /dev/zero onto your sda1, right, and that'll, like, break your machine; you know, at some point, don't do that. So we have here a Service, and this manages a Revision, and that manages a Deployment.
Yeah, but, I mean, if you're going to change your image, that's a different Revision, right? At the end of the day, you probably... like, I mean, the whole idea is that, like, when you're using Deployment, you generally view your ReplicaSet as managed by the Deployment, and you rarely deal with it directly. Now, the Service will also manage, I assume, the Istio resources, and I don't know those well enough to actually say what it is; it's probably a VirtualService, and probably Gateway stuff, and all that.
kubectl get all didn't used to actually include CRDs and stuff. Now it apparently gets all... they're not kidding when they say... well, it still doesn't get everything. I think there's something you have to do to register; I'm still confused about how this extensibility works, because I can still kubectl get, you know... I don't think ServiceAccounts are in here, yeah, so that's not included in all, but a lot of these other ones are. All right. So we have that, and then we have Configuration.
The node IP... okay, so we mentioned that, and then we get the URL for our service.
Echo the host URL: okay, so that's helloworld-go.default.example.com, that URL. Now, from chat: "your old friend is confused; she needs help as she works on her math workbook." Yeah, no, I'm figuring this out; tell her that this is the learning process and we're all figuring this stuff out. So if I go through and I hit that, I get a bad URL.
Maybe... oh, look at that, it works now. Did that spin up a... that didn't spin up a pod, because we already had one running, I think. Oh, it was terminating, so, like, it does... it wasn't spinning up a pod. So did it scale to zero? Sweet. Look at that! Okay, that's really cool. kubectl get pods: I see here, so now we're running, so it actually brings up three.
And now, if I do this again, it's really fast. We've got to keep it alive; we've got to juggle it, right? Oh, three containers, one pod. Okay, that makes sense: READY 3/3. You're right. Okay, so you've got to wait five minutes now to be able to get it to scale back down. I wish there were a way to sort of force it. From chat: it takes a few seconds for Knative to scale up your application and return a response.
All right, so one of the things I want to do here is... and we're going to run out of time, so I feel like we've just started scratching the surface of Service, but at least we've gone through enough stuff that, at a future TGIK, we can definitely start digging into the details of this, because I still want to know a lot, a lot more. Okay, so I can do kubectl -n knative-serving edit configmap config-autoscaler, and, oh, look.
So let me show you all a trick: kubectl -n knative-serving get deployments, and we want to change the autoscaler, and then we're going to edit this thing. The way that Deployments work is that they end up taking a hash over the information that's under the template, and if that hash changes, they do a redeploy. So we need to make that hash change, but I don't want to actually change any of the real stuff in there. So you can just add an annotation here, which, like, kicks it.
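The trick works because the rollout is keyed off a hash of the pod template, so any cosmetic change, even a throwaway annotation, forces a new ReplicaSet. Simulated locally (a sketch; the real controller's hash function differs, but the principle is the same):

```shell
python3 - <<'EOF'
import hashlib, json

def template_hash(tpl):
    # hash the canonicalized pod template, as the Deployment controller does conceptually
    return hashlib.sha256(json.dumps(tpl, sort_keys=True).encode()).hexdigest()

tpl = {"metadata": {"annotations": {}}, "spec": {"containers": [{"image": "autoscaler"}]}}
before = template_hash(tpl)
tpl["metadata"]["annotations"]["tgik/kick"] = "1"   # the do-nothing annotation
after = template_hash(tpl)
print(before != after)  # True: a changed hash means a new ReplicaSet, i.e. a rolling restart
EOF
```

Deleting the annotation later kicks it again, so the trick is repeatable.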
So that's pretty cool. Okay. So, while we're waiting for that to scale down again, because I want to see the live scale-up again (I felt like I didn't really look at that the way we needed to), what I do want to do, and I don't know where I saw this, but I did want to do the custom DNS stuff, using a custom domain. Okay.
So what we can do here is, okay, so we have to change the custom domain here. Okay, so we're example.com. What happens if you have multiple of these? Oh, that's right! You can actually have, like... oh my god, one blurry... whoa. I think I was seeing that you can actually say, "well, for this set of things I want to use this domain, and for this other set of things I want to use this other one."
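In Knative Serving, the default domain lives in a ConfigMap named config-domain; replacing the example.com key changes the hostname every service gets. A sketch (the selector-scoped variant mentioned above attaches a label selector to a domain entry):

```shell
# Make every Knative service get <name>.<namespace>.tgik.io instead of example.com.
cat > config-domain.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  tgik.io: ""        # "" means: use this domain for everything not matched by a selector
EOF
# kubectl apply -f config-domain.yaml
grep -q 'tgik.io' config-domain.yaml && echo ok
```

Once applied, existing routes get rewritten to the new domain, which is the "boom, everything got fixed up" moment further down.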
That would be nice outside of Istio too, but Istio doesn't work with other ingress systems, so I wish we could do that. But one of the things that I think you want to do is to say, "hey, I want to give this namespace permission to sit on this particular hostname," right? So imagine that, you know, you're a company running a multi-team cluster. This has a selector, but you're then programming Istio under the covers, and between you and Istio there's essentially no way to enforce that only this namespace can use this particular hostname, or this part of the URL space. Even if you all actually locked this stuff down, Istio would still let anybody squat on any hostname. And so, you know, what you could do is say nobody else can actually then go ahead and edit Istio rules, right? You could do that with RBAC and say only Knative can edit the Istio rules, so you could go ahead and do that.
But now that means that nobody else can deploy anything using Istio unless they use Knative. Knative essentially becomes a wrapper around this stuff, instead of one of many things that might be running on top of Istio, and I think that that's a loss for the ecosystem, when you can't run these different things side by side. Because you may want to do, like, a batch, MapReduce-y type of thing that's going to run on top of Kubernetes.
Maybe it uses Istio under the covers to do stuff with its data paths, and you can't do that alongside Knative, because these things would collide. You say, "we run our own gateway"... I wonder what controls... no, you can't do it with your own gateway, because, you know, Kubernetes RBAC is actually based on resource paths, and the gateway hostname doesn't show up in a resource path, and so there's no way to do it.
Maybe, if Istio were optional, it'd be a lot easier to start plugging into stuff like that. Because that's one of the fundamental things that we're thinking about with Contour: how do we actually deal with multi-team, and the policy around who's able to squat on which domain names and which endpoints, right? And so you all are solving this where it's like, "oh hey, if I have, you know, example.com, and then I have a namespace called myapp, then you have myapp.example.com, and you want to be able to squat on that." I think it's great that you're providing a default hostname, but, like, should that be something that's only available to Knative apps, or should that be something that's available to apps whether or not they're using Knative?
Are there ways that we can break these things down so that there are more reusable pieces? And so that's the thing that I'd really love to see. All right. So, anyways, I want to do this. So, like, now we have that worked in, and if I do... okay, so now that changed it. So this is beautiful, because I changed that config file and then, like, boom, everything got fixed up, which is really, really fun. And so now what I can do is, if I do the get service...
And I have tgik.io here, and it's just... I can update my CNAME here for this stuff too. Oh wait, I copied the wrong thing; not the host URL, it's... what do we call it... echo the address (even though it's not an IP address). Okay, so there we're actually going to go ahead and update that, and I want to do, like, five seconds, yeah. So I have this zone with a really short TTL, because we change these things all the time. All right, so now that's updated, but that's star-dot-tgik.io, and so one of the things that we're hitting here is how wildcards work with DNS.
You can do these wildcard records like this, typically, but they only go one level deep, right? And so what we end up needing, and this is unfortunate also, is another record, because we actually have... so this thing ends up being, where was it... helloworld-go.default.tgik.io. The DNS record that I just created isn't going to be good enough here.
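That one-level behavior, simulated. This is a sketch of the matching rule as described above (and as many hosted DNS providers implement it), not a DNS client:

```shell
python3 - <<'EOF'
def wildcard_covers(record, name):
    # Model: the "*" stands in for exactly one leftmost label.
    rec, nm = record.split("."), name.split(".")
    return rec[0] == "*" and len(rec) == len(nm) and rec[1:] == nm[1:]

print(wildcard_covers("*.tgik.io", "default.tgik.io"))                 # one label deep: covered
print(wildcard_covers("*.tgik.io", "helloworld-go.default.tgik.io"))  # two labels deep: not covered
EOF
```

Hence the need for a second record, *.default.tgik.io, per namespace.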
We're going to go five seconds here, and I'm going to create this. And then, yeah, and then you're talking blue-green, because that adds yet another level on top of this stuff, because you end up with version-dot-appname, you know. And a lot of this ends up being based on the experience around App Engine, and so, yeah. So I'd love to actually see this...
Do automatic DNS with external-dns; I was going to get to that, I mean, but, like, again... And then, what we've seen here is that all of this is still plain old HTTP, and you'll notice that Chrome now is nice enough, when we do this, to show, "hey, you're not running secure." So we need to do TLS, and the TLS story with Istio is actually really weak right now.
Also, there's no support for cert-manager, which is generally the accepted way of doing Let's Encrypt with Kubernetes. Getting cert-manager and Istio together is really rough going right now. You can use cert-manager to get certificates with the DNS challenge, but for the HTTP challenge, cert-manager doesn't know how to program Istio ingress routes. So yeah, that's a little bit tricky here. But anyway, so now we have default.tgik.io.
A
If we go to, well, now if we go and click through this, and... oh, site cannot be reached, name not resolved. Did I... oh no, I screwed up. Now I'm gonna have to... so it's star dot default dot tgik.io. Did I not wait long enough? *.default.tgik.io... oh, I see, I named it to itself. I'm an idiot, okay. I wanted to go here. Okay, we're gonna have to wait five seconds.
A
Yeah, okay, so we've got that going. So that's one of the things that gets tricky: you have to set up DNS, and as you set up DNS you have to do it for each namespace, and then as we start going further, one of the things that we could do here is run a DNS server on-cluster that actually knows how to populate these records on demand.
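One way to sketch that on-demand, on-cluster DNS idea is CoreDNS's template plugin, which synthesizes an answer for anything under a zone instead of requiring per-name records. The Corefile below is a sketch under those assumptions, and 203.0.113.10 is a placeholder ingress IP:

```shell
# Hypothetical Corefile: answer every query under tgik.io with the ingress
# address, at a 5-second TTL, no matter how many labels deep the name goes.
corefile=$(cat <<'EOF'
tgik.io {
    template IN A {
        answer "{{ .Name }} 5 IN A 203.0.113.10"
    }
}
EOF
)
printf '%s\n' "$corefile"
```

A catch-all like this removes the one-level wildcard problem entirely for development clusters, at the cost of answering for names that don't exist.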
A
That might be good for developer scenarios, where you're going to have these deep names, and then you can use DNS for sort of real host names for production stuff. So that may be a good way to go here. Also, so that's pretty cool. Okay, so I'm actually going to call it here. I didn't get through nearly as much stuff as I really wanted to, and I think a big part of that was...
A
You know, the hiccups that we hit with the admission controller and getting that stuff up and running. Honestly, I'm not sure what happened there, to be honest. It could be those duplicate values in the YAML, and maybe those are non-deterministic, and when you have those duplicates and I deleted the wrong one, and then when I reapplied... I don't know, I don't know what's going on there. So we definitely have that. George, thank you so much for helping out.
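The duplicate-key suspicion is plausible: YAML parsers commonly accept duplicate mapping keys and silently keep just one of them (often the last), so which value "wins" after a delete-and-reapply isn't obvious. A contrived fragment showing the hazard; the names are made up:

```shell
# Two "name" keys in one mapping: many parsers keep only the second,
# with no error, which makes apply/delete cycles behave unpredictably.
doc=$(cat <<'EOF'
metadata:
  name: webhook-a
  name: webhook-b
EOF
)
printf '%s\n' "$doc"
```

Linting manifests with a strict YAML parser before applying them would surface this class of problem up front.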
A
So Brendan has the award right now. All right, okay, so, thoughts on Knative. So it's on top of Istio, and that's not a small stack. Istio brings a lot of power, but it also brings a lot of complication on top of it, and I think this is something that we've got to be aware of. It brings value, and I like the problems that Istio is solving, but I think it does make this stuff that much harder, from an operational experience, to figure out what's going on. Demos?
A
Well, right, the dashboards look awesome; we didn't get into the dashboards. I'd love to see the install story... like, Knative itself is broken up into these different components, whether you're talking the serving versus the build versus the eventing stuff. I think it's great that they're viewing it as a bunch of building blocks that work well together. I like projects like that, and I
A
think if you look at the Kubernetes API, you know, there's these Voltron moments where you have all these different things that come together, and all of a sudden they add up to something that integrates well. But each of those things, sort of on their own, can add a lot of value, and so I'd love to see more of that sort of Voltron moment come out of Knative, so that we can have a larger ecosystem of things working together.
A
You know, the fact that the build stuff is built into the serving thing bothers me a little bit, because I'd love to see that autoscaling down to zero, along with the assumptions on how requests are done and all that stuff which we didn't dig into, I'd love to see that separated from the build, right? And maybe from some of the... well, you have to have something for the service hookup to be able to... well, maybe we don't, right? I'd like to think, like...
A
What I haven't seen are great examples, you know, in terms of how you actually do that in a way where you can start extending it with things that were not thought about from the start. And for me, that's one of the things that has been the most exciting about Kubernetes: you see people come and start reusing the primitives in ways that we never thought about from the get-go. That, I
A
think, is super, super exciting, and to do that you have to have these things loosely coupled. And so I see the beginnings of that with Knative, and I'd like to see even more of it. I'd like to see that sort of building-block type of thing separate from the sort of whole experience. You need both of those: you need a great end-user experience, and it's clear that was the first goal here, but I'd also like to see those building blocks actually work together.
A
Let me switch back to the screen. I just want to see this happen one more time here: kubectl get pods, okay, so they're not running, and with watch and 0.5, so we're updating this every half a second here. And what I'm gonna do is go to the hello world, do a reload, go back, and look at that: it's spinning up a pod, initializing, running, and it completed. So sweet. That's super, super cool. All right, so thank you, everybody who joined me. I'm sorry we went way over; this went long.
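For reference, that scale-from-zero check can be reconstructed roughly as the session below; the Host header value matches the route from earlier in the episode, and INGRESS_IP is a placeholder you'd replace with your ingress address:

```shell
# Reconstructed demo loop (not the literal commands; a sketch):
session=$(cat <<'EOF'
watch -n 0.5 kubectl get pods
# then, in a browser or a second terminal, hit the route to wake the service:
curl -H 'Host: helloworld-go.default.tgik.io' http://INGRESS_IP/
EOF
)
printf '%s\n' "$session"
```

The first request arriving at an idle service is what triggers the pod you see initializing in the watch output.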
A
We hit a bunch of those hiccups with Istio. I definitely want to do more episodes on Knative; I want to dig into what's going on there. I feel like I just scratched the surface, so look forward to future episodes here. Probably not next week; I think I've got kid duty. Also, so thank you again, everybody, and thanks for watching. Please hit the like button and stay tuned for more in the future.