From YouTube: TGI Kubernetes 076: Exploring KEDA with RabbitMQ and Go
Description
Come hang out with Kris Nova as she does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Kris talking about the things she knows. Some of this will be Kris exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
Hello and welcome to TGIK! I am... I think, let's see... I'm two minutes late today. I accidentally locked myself out in the hallway again, so I was just standing in the hallway like, TGIK is about to start, but I don't have a key to get back in. Luckily, one of my very nice colleagues let me back in. Anyway, happy Friday everyone. It is good to see everyone. Let's see what's going on in chat. Oh my gosh, so many people chatting already. Okay.
So, this week's HackMD is the first thing. Good to see you, happy Friday. I think George is out today, so I think we have Duffie on the line; if not, I can also drop stuff in the chat as well. So I'll kind of be playing the role of both myself and of George slash Duffie today. Okay, very first one we have is Seth: "I finally get to see one live."
"Howdy from Bryan, Texas." Bryan, Texas... I feel like I've been there before; I used to live in Texas. Anyway, good to see you, Seth. Liu Maddie, one of my favorite folks to join, says happy Friday everyone. "Nova for president"? No, I would be a terrible president. I might be better as, like, a general or something, but I can't make decisions. Suresh says hey.
We have Gustavo, hello from Chicago; Martin, hello from the Netherlands; Antoine, hi from Paris; Christoph, hello from Düsseldorf, Germany. Crazy. Hello and happy Friday from Virginia, and hi from Dubai. Let's see, wow, a lot of people here today. Herman, hi from the internet. Greetings from Sweden, from Rotterdam, hello from Finland. Learn Tech To Innovate says hello. Geert, hi from Belgium. And "hæ" here from New York City. So that's funny. Hæ hæ!
A
That's
how
you
say
like
hi
like
a
friendly,
hi
and
Icelandic
like
if
you're
really
walking
up
to
you
know
the
gas
station
or
whatever
you
just
be
like
hi
hi
and
anyway,
that's
just
cool
that
you
typed
that
okay,
let's
see
what
else
Tim
hello
from
Germany
pshoo
hello,
there
Lynn
hello
from
North
Carolina
I'm,
assuming
NC
hello
from
Belgrade
Plano
Texas,
one
of
my
best
friends
in
Plano,
Texas,
Brazil
hi
from
Ukraine
I.
Think
that's
the
first
time.
we've seen Ukraine here. Hello from Johannesburg. Hi from... oh, I thought it said Iceland, but it's Ireland. Hello from Ireland, hello from New York City, and hello from Bangladesh, and Denmark. And it looks like more and more people are jumping into the chat. Anyway, hey everyone, happy Friday, welcome to TGIK. We're gonna be talking about... it's pronounced... I actually talked to one of the authors, it's pronounced "KAY-da." I might call it "KEE-da," but we'll see how it goes. And we are here live in the Heptio studios, and this is kind of sad.
So today is our last day here at the Seattle office, so everybody from Heptio was out here in the hall; we're about to have kind of a saying-goodbye-to-the-office party. So after this episode of TGIK I get to tear all of this stuff down, load it up into my Subaru, and we're gonna hang on to it until we get our new office over in Bellevue.
So this will be the last TGIK in this room, and it's, it's really sad. And so what I think I'm gonna do after this is done is, like, a 3D panorama, so y'all can see exactly how horrible this desk is: it's covered in, like, M&Ms and, like, pencils and, like, Diet Coke cans. It's really great. So anyway, that's a bit of sad news, but we're getting bigger and better, so it's also exciting at the same time.
Okay, so earlier somebody dropped this into the HackMD; it's the very first link. It's here, and I'll drop it in again. You can click on that, and that's what we're gonna be pulling up here in a second. Okay, so I just put that back in the chat. Let's go back to my screen, let's actually go and open this thing up. Bam. And let's drag it up here.
There we go. So this week, like I said, we're doing Exploring KEDA. I'm gonna really try hard to get this right, because I've been calling it "KEE-da" in my head all week: Exploring KEDA with RabbitMQ and Go. And we have some stuff to talk about, and we're gonna try to make this week a little bit fun, but I think I'm kind of on my own here, so we'll see how much fun we can have and how we can orchestrate a lot of this. Anyway.
The first thing that we wanted to do was This Week in Kubernetes, so I went and found some stuff that I think is exciting. I'd like to... once I can pull up the chat. Kristen says the M&Ms are the best. We don't have four items, but, like, I bought this, and I'm pretty sure I'm the only one who's eaten anything from it for, like, the past six months. So those are the M&Ms that I've been munching on here.
The first thing we have is this really exciting blog post, or not a blog post, but this guy (I'm assuming it's a guy): somebody put this up on Reddit, and they have a few of the Cloud Native Rejekts tickets to give away for free. So regardless of whether they're still available or not: thank you so much for doing this. This is great.
I wanted to point this out because I know a lot of folks will get invited to conferences, or they'll have, like, an extra bed in their hotel room, or they'll get an Airbnb or something, and there are a lot of people out there who will take advantage of this, and it can be a really big game changer for them. So if you ever get an opportunity where you can help someone out, you'd be surprised how many people are actually interested in stuff like this. So thank you
so much for doing this. And that's gonna tee us off for this conference in general. So we've seen this a few other times in the industry, but this is the first time we're doing it the cloud native way, and this is right around KubeCon. So the idea here is: if you submitted a talk for KubeCon and for whatever reason you didn't get accepted, or you couldn't make it,
you could also submit that same talk here, and we're putting together sort of like the KubeCon talks that didn't make it, post-conference. That's unofficial. So if you're going to be in Barcelona for KubeCon, I'd say also come and check this out. A lot of these talks are going to be really great. There's one of our folks here at VMware, Jason; he works on Cluster API. I know Carolyn from Microsoft, she's amazing. Rita from Microsoft, also amazing. So there's a lot of good talks here, and honestly,
A
These
are
the
talks
that
I
feel
like
folks,
really
put
their
heart
and
soul
into
so
that
you'll
actually
probably
take
away
quite
a
bit
more
from
this
conference
than
you
would
from
cube
con.
In
my
opinion,
anyway,
people
really
really
like
to
give
talks
and
if
they're,
that
passionate
about
it
you,
but
that
they
worked
pretty
hard
on
it.
Okay,
so
that's
cloud
native
rejects,
so
here's
where
you
can
go
and
talk
to
Lawrence
about
the
tickets-
and
here
is
the
link
to
cloud
native
rejects.
Okay, so Dat says: can we make an episode about HPAs with custom metrics? Please, please, please. I'm pretty sure that's today's episode, honestly. We're going to be talking about HPAs, and the way that KEDA works is, it has a custom metrics server that it plugs into, and it does all that through CRDs, but we'll look at all that here in a little bit. But yeah, if we don't touch on it today, we can certainly touch on it later.
Once we get into the KEDA part of the episode, we can talk more about that. Okay, so next we have visualizing YAML. So this is exciting. I came across this, and the first thing I saw was: how do you visualize dependencies in Kubernetes YAML? And then I, like, asked myself that question. I was like, Nova, how do you think about it? And I literally just think of, like, a notebook of YAML, and then I don't really imagine the relationships or how it fits together.
This might be, like, a cool color coding or something for YAML, to make it easy for us. And the first thing I did was just kind of start scrolling, and then I was like, oh whoa, there's a ton of, like, really cool graphy, relationship-y things going on. So then I went back to the top and actually went through it and read this. But yeah, it's just talking about some different tools.
There isn't any static analysis tool for YAML files, but you can visualize dependencies in your cluster with Weave Scope, KubeView, or Istio. Holden sends a smiley face; I'm pretty sure Holden is trying to say hi. Anyway, hi Holden, it's nice that you're on my stream and I'm not hanging out on yours. Ashish says "hello Kris, hashtag supernova." Ashish, another one, waves hi. Good to see everyone. So yeah, anyway, if you want to, like, have awesome fancy graphs for your YAML,
A
Well,
you
can
use
any
of
these
three
tools
here
and
if
you
want
to
see
what
they
look
like,
you
can
come
through
and
say:
okay
option,
one:
here's
we've
scoped,
and
here's
like
this
really
cool
gif
and
it's
a
project
from
our
friends
at
GCP
hats
off
to
our
tcp
friends
and
you
can
come
in
and
you
can
sort
of
explore
your
kubernetes
cluster,
so
gosh
I
think
he
was
last
year,
I
wrote
this
tool
called
krex
here
it
is
here
it
stands
for
kubernetes
resource
Explorer
and
oh
there's
a
that's
funny.
Okay, so if you come in here, you can see that we have, like, all of these different explorer implementations, and it allows you to, like, walk through your Kubernetes cluster based on label matching. So you could come in and you could say: okay, I have a pod with this label, and then I want to, like, find the deployment that's associated with that, and then I want to find the service that's associated with that, and then I want to find the namespace that's associated with all the things. And krex kind of lets
you do it, and bakes in some convenience tools and stuff, like you can spin up a debug container and whatnot. So this was kind of like a prototype; I never finished it, and, like, don't run this, it's not good. But this idea has been a problem for a lot of folks for a long time, which is: how do I actually start to visualize and navigate my Kubernetes cluster?
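The label-matching walk just described can be sketched as a toy in Go. This is not krex's actual code, and the resource names and structs here are made up for illustration, but it shows the core idea of matching a selector map against an object's labels:

```go
package main

import "fmt"

// resource is a toy stand-in for a Kubernetes object that carries labels.
type resource struct {
	kind, name string
	labels     map[string]string
}

// matches reports whether every key/value pair in selector appears in labels.
// This is the same relation Kubernetes uses to tie deployments and services
// to the pods they select.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	pod := resource{"Pod", "web-abc123", map[string]string{"app": "web", "tier": "frontend"}}
	cluster := []resource{
		{"Deployment", "web", map[string]string{"app": "web"}},
		{"Service", "web-svc", map[string]string{"app": "web", "tier": "frontend"}},
		{"Deployment", "db", map[string]string{"app": "db"}},
	}
	// Walk from the pod to everything whose selector matches its labels.
	for _, r := range cluster {
		if matches(r.labels, pod.labels) {
			fmt.Printf("%s/%s matches\n", r.kind, r.name)
		}
	}
}
```

In a real cluster the selectors live on the Deployment and Service objects, and a tool like krex would fetch them through the Kubernetes API rather than from an in-memory slice.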
How do all these different resources, which are inherently relational, come together, and how do I start thinking about them as a human? So here we have some tools that kind of help us with this. So this first one is Weave Scope. This other one looks really exciting, because I just like the colors and it's, it's, like, simplified, but it's called KubeView, and you can actually go in and you can see, like, the different YAML and see how things fit together and explore your resources.
And last but not least, we have good ole Istio here, that's showing us how our traffic is working, and so this is more like a network topology. And I've seen some libraries like this in JavaScript that can help you build these interesting maps, and I think you can drag them around. But yeah.
If folks are interested in these, we could probably do, like, a visualization episode where we kind of, like, look at all three of them. There's Linkerd, there's krex, and there's Jaeger, and then there's probably even some more that we can find if we really start digging into it. But yeah, people are starting to finally visualize how Kubernetes is working and how everything is coming together, and I think that's really exciting.
Okay, let's close this. This next... oh, this one's exciting: k3s. So the sort of joke here: everybody knows "k8s," and some people say "kates." Here, let me do this, doc cam. Oh, y'all aren't supposed to see these yet; we're going to talk about these later. You didn't see those. So with "k8s": if we take the word "kubernetes" and
we count one, two, three, four, five, six, seven, eight, we replace those eight characters with the number eight, and that's where "k8s" came from. And this is just smaller, and it's a little bit more Twitter-friendly than the full word, and so this is our three-letter "kates." And then, you know, there's "observability," which I'm gonna kind of abbreviate here: "o," then dot dot dot, then "y." That's where the word "o11y" comes from: there's an 11 in the middle because there are 11 characters between the "o" and the "y" in the word "observability."
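That numeronym rule, keep the first and last letters and replace everything in between with a count, is small enough to sketch in Go:

```go
package main

import "fmt"

// numeronym abbreviates a word as first letter + count of interior letters
// + last letter, the rule that turns "kubernetes" into "k8s" and
// "observability" into "o11y".
func numeronym(word string) string {
	if len(word) < 4 {
		return word // too short to be worth abbreviating
	}
	return fmt.Sprintf("%c%d%c", word[0], len(word)-2, word[len(word)-1])
}

func main() {
	fmt.Println(numeronym("kubernetes"))    // k8s
	fmt.Println(numeronym("observability")) // o11y
}
```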
So "k3s" is a joke on that first one: it's like k8s, but five times smaller. So that's where k3s comes into play. Pokémon k8s, yeah. So yeah, let's see here, let's go back to my screen. And of course, the very first thing we see at the top of the k3s page is: it's lightweight Kubernetes, a smaller version of Kubernetes. And we see this analogy in, like, the Linux kernel and Linux distributions. So we all know there's Alpine Linux, there's Linux From Scratch,
and this is also small and lightweight, for folks who want to run Kubernetes on small devices. Like maybe you want to run it on a Raspberry Pi, or you want to do some, like, micro instances or something; this might be something that you would be interested in running. Usually "lightweight" implies simpler, and in some cases can imply more or less stable, but yeah, you see here: great for IoT, great for ARM, great for CI. IoT is the Internet of Things, so, like, if you wanted to put your espresso machine on the internet.
A
Arm
is
the
processor
that
doesn't
really
get
SuperDuper
hot
that
we
have
in
all
of
our
smart
phones,
see
I,
custom
integration
and
that's
how
you
have
continuous
integration.
Excuse
me:
that's
how
you
get
your
containers
built
while
lead
says
one
podcast
calls
kiiied,
says:
keys,
yeah,
I,
don't
know
how
I
feel
about
that
I
kind
of
like
kiiis
I,
don't
know.
I,
don't
know
I'm
not
in
a
naming
mood,
okay.
So anyway, this is k3s. Yes, I think I'm gonna do TGIK on this next week. I really want to do it, but let me know what folks think, and if anybody's tried it or can point me to any resources. This is a new project; it just came out. You can come to GitHub and you can see it's from our friends over at Rancher, and here's the GitHub repo, and you can see it's already got 6,000 stars. So this is exciting that this was released. So come check this out
A
If
this
is
your
thing,
so
that's
k3s,
okay,
CFP
for
North
America.
So
if
you
go
to
Twitter
well,
here's
my
Twitter
account
everyone.
If
you
go
to
Twitter,
you
can
see
the
scene
CF
tweeted,
San
Diego.
Here
we
come
a
CFP
for
cloud
native
and
you
can
click
on
the
link
here
and
it'll.
Take
you
to
the
actual
CFP
page
for
kube
con
North
America.
So
the
way
cube
con
works
is
there's
two
Cube
cons
every
year.
There's
one
in
EU!
Well,
there's
three!
Now
there's
one
in
EU!
there's one in China, and there's one in North America. The one in North America changes its location, the one in EU changes its location, I don't know about China, but we try to space them out. Like, two years ago it was in Seattle, and then last year was in Austin, and then this year it's in San Diego. I think I did that right. Wow, I can't believe I've been going to these for three years. Anyway.
If you want to submit a proposal, go here. I'm happy to help and take a look at it. I think this is gonna be the first KubeCon I don't submit any CFPs to, just so I can kind of enjoy KubeCon for once and not be stressed out the entire time. But yeah, I'm willing to take a look and help people out. If not, we can connect you with other folks.
A
If
you're
interested
in
submitting
the
CFP,
we
can
always
do
everything
we
can
as
a
community
to
get
you
what
you
need
and
increase
your
chances
of
getting
accepted.
It
looks
like
we
have
some
people
in
check,
there's
also
Keith
Rios.
What
is
k3
oats?
Let's
show
this
the
kubernetes
operating
system.
What
is
what
is
this
now
rancher
labs
k3o?
Is
an
operating
system
completely
managed
by
kubernetes
it
launches
in
seconds?
Ok.
So
this
is
also
cool.
Okay, it's k3OS, "get started with k3OS," so I think it's an operating system built on top of k3s. So this is really cool as well. See, I really want to do k3s, and k3OS: "purpose-built OS for Kubernetes, fully managed by Kubernetes." This is awesome. I really want to play with this stuff. So yeah, k3s and k3OS, thanks Rancher: exciting new technology that we're gonna take a look at, hopefully next week. So there's that. We brought this up once before, but I'll mention it again for shameless self-promotion.
A
Joe
and
I
are
going
to
be
at
at
cube
con
in
Barcelona.
So
if
you're
gonna
be
there
come
say
hi
joe
and
I
are
doing
we
put
together
and
we
were
working
on
this
earlier
this
week,
so
we
actually
are
getting
pretty
excited
about
this
talk.
It's
gonna
be
exciting,
we're
doing
what
the
past
two
years
of
t
gik
has
been
like,
and
what
are
the
lessons
we
learned
and
how
did
we
get
to
where
we
are
today,
and
we
have
some
exciting
announcements
for
what's
to
come
in
the
future?
So if you're gonna be around and you're a TGIK fan and you want to get some TGIK swag, come hang out with me and Joe. It's gonna be a super relaxed talk, very similar to TGIK, and you'll get to see... I think this would be the first time Joe and I have been together on stage, so you'll get to see our exciting energy in person. So yeah, come check us out. And speaking of KubeCon, this is, this is the moment we've all been waiting for.
Let's go back to our doc camera. Speaking of KubeCon, and I already mentioned this was our last day at the Seattle office: I think three years ago, at one of those hackathons at Microsoft, before I even worked here, I printed off a few thousand Charmander Kubernetes stickers. And I'm cleaning off the desk, and in the very back of my desk I found a brand new package, that I had forgotten about, of unofficial Kubernetes Charmander stickers. These are exciting, because everyone knows Pokémon Go runs on Kubernetes.
A
It
was
one
of
the
big
production
workloads
and
so
we're
big
fans
here
and
I.
Think
the
history
is
there's
a
Pikachu
one
in
a
Bulbasaur
one
as
well
and
I.
Think
the
original
one
came
from
Tim
Hawkins
over
at
Google.
He
did
Pikachu
anyway
I
liked
Charmander
he's
one
of
my
favorite
Pokemon.
So
if
you're
gonna
be
at
cube
con,
you
can
come
get
one
for
me.
So
I'll
give
every
person
I
meet
one
and
its
first
come
first
serve.
So
you
gotta
find
me
at
cube
con.
So the way this is gonna go is: I'm gonna ask a trivia question, and the first person to get it right in the chat is going to be the one who gets the stickers. Okay, what is the date... what is the date of my first commit to Kubernetes kops? Ready, go. Let's see who wins. I don't even know when this is, so I'm gonna look it up too. So let's go over here, doo doo doo, Kubernetes kops.
Now, how do we do this in GitHub? There's a way to do it. I think you go to... 545 contributors, yeah. Kris Nova, here I am. No, I don't want this; I want my Kubernetes kops.
October 15th, 2016. This is it. This is my first commit right here: removing a work-in-progress line from the README. Yep. So my first commit to Kubernetes was literally a one-liner. And let's see who got this right. It looks like, let's see, I think Ben Moss. Ben Moss is our winner, right? Oh no, I want to make sure I get this right. In the chat, let's see who sees the screen share and types fastest. So "10/15," blah blah blah. Okay, yeah.
A
Think
Ben
moss
did
it
too,
and
you
know
what
for
good
times.
Let's
do
another
one!
That's
not
going
to
be
1
2,
3,
4,
5,
6,
7,
8,
9,
10,
so
10
for
Ben,
and
let's
do
another
one
1
2
3
4
5,
6,
7,
8,
9,
10
there,
okay,
the
question
is
I'm
gonna
put
in
the
chat
so
that
everybody
sees
it
at
the
exact
same
time.
The
question
is
the
trivia
question.
A
All
right
there
you
go
so
whoever
gets
that
one
right,
I'll
send
ten
stickers
to
you
as
well,
and
then
the
rest
of
them
I
will
bring
with
me
to
keep
coins
and
congratulations
to
our
friend
Ben
Ben
moss,
who
got
the
the
first
one
right:
Sousa,
destra
million
in
Charizard,
correct,
okay,
so
servoz
is
our
next
winner,
very
good
test,
okay
cool!
So
let
me
write
this
down,
so
I
don't
forget
and
what
I
want
you
all
to
do
is
either
on
Twitter
or
kubernetes
slack
or
any.
How did we spell that? S-A-D and then a bunch of Z's. Okay, cool. Charmeleon: Fire type. Charizard: Fire and Flying. Holden's holding up both of their names and their types, but I think Holden already has Pokémon stickers, so that's okay. Waleed says: we did not see the question. Okay, we might have to do another one. Let me see.
Maybe I can order up some more stickers. Okay, we're getting off in the weeds here; let's go back, and let's jump back into our doc here. Okay, so anyway, that's KubeCon, and those are the stickers. Congratulations to Ben and Siddhas and everyone else. Thanks for playing; we'll try to do some more trivia in the future, and I'll try to come up with a way to, like, make it fair for everyone.
Like, maybe I can drop a link or something and it'll, like, count down. I don't know; I'm open to ideas here if anybody has any good ones. Okay, so last but not least, let's have a good transition here into KEDA. So this is the blog. If you want to learn about KEDA, come here and read this. Everything I'm gonna say today is written in here, and a lot more is written in here as well. So this talks about everything from: what is it doing? Why are we autoscaling?
It has some really fancy graphs, and it comes down here, and you can see, like, this is how the whole system acts, how everything works. We're going to be talking about all this in detail, so I'm going kind of fast right now, but what I really wanted to get to is this thing here: the ScaledObject. So this is a Kubernetes CRD, and what a CRD is... it's literally anything you want. You invent these from thin air.
You can put whatever fields you want in here. So, for instance, if we pull this up, let me see if I can zoom in; you can see here we have kind, we have name, we have spec, we have conversion, groups, we have names, we have more versions. All of these things were just typed by a person, and they can be anything you want. You could have a CRD that represents Pokémon. You could have a CRD that represents a server. You could have a CRD that represents your best friend.
It doesn't really matter. It's literally like a schema for a database; that's how I think about CRDs. And then a CR, and this is actually an example of a CR, is one instance of a CRD, so like a row in a database, right? And you can have multiple rows: you know, you have a database of favorite animals, and row 17 is a dog and row 18 is a cat, and those are various CRs. So you can have different instances of your CRDs, called CRs: custom resources.
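That schema-versus-row analogy can be sketched in Go: the struct plays the role of the CRD (the schema somebody typed up), and each value plays the role of a CR (one row). The FavoriteAnimal type here is purely hypothetical, not a real Kubernetes resource:

```go
package main

import "fmt"

// FavoriteAnimal plays the role of a CRD: a schema a person invented
// from thin air, with whatever fields they wanted.
type FavoriteAnimal struct {
	Kind    string // every Kubernetes object carries a kind
	Name    string
	Species string
}

func main() {
	// Each value is like a CR: one instance (one "row") of the schema.
	rows := []FavoriteAnimal{
		{Kind: "FavoriteAnimal", Name: "row-17", Species: "dog"},
		{Kind: "FavoriteAnimal", Name: "row-18", Species: "cat"},
	}
	for _, cr := range rows {
		fmt.Printf("%s %s: %s\n", cr.Kind, cr.Name, cr.Species)
	}
}
```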
So how does this differ from, or complement, Knative? I think it deals with a lot of the same problems Knative does, and I think it's just kind of a simpler implementation of it. And when I was talking earlier, I was talking with Jeff over at Microsoft, and one of the things he really hammered home was how Microsoft and Red Hat plan on growing this for the community. So I think that's going to be one of the big differences between it and Knative. Okay.
Actually, let's go here. I'm just gonna google it: "keda kubernetes github." Okay, so: github.com/kedacore/keda. We all know my favorite thing to do here is read the first line and see if I can tell what this thing is and what it does from the first line in the repo. So: "KEDA is a Kubernetes-based event-driven autoscaling component. It provides event-driven scale for any container running in Kubernetes."
So my understanding of this, and if folks are on the phone who have worked on KEDA, please, please, please jump in and correct me and keep me honest here: my understanding is KEDA takes some of the lessons learned from serverless functions that our friends at Microsoft had built up in Azure, and applies them to Kubernetes. So when we say it's an event-driven autoscaling component, what we mean is: there's an event.
There's a lot of things you could do, and have your software do, that you can then take and plug in to a different system. And you can, you know, have a loop that says: oh, I'm watching this filesystem... watching this filesystem... there's nothing there, there's nothing there... oh, there's a file! That's an event. (Chat says: thanks Kris, that's perfect.) So yeah, arbitrary events that we can then plug in to Kubernetes autoscaling. So now we know what an event is.
Let's talk about what autoscaling is. So some folks at home have heard the term HPA, which is another acronym; I'm not a super big fan of acronyms, I very much prefer verbosity whenever I can. So HPA stands for Horizontal Pod Autoscaler. That's actually a project in Kubernetes, the Horizontal Pod Autoscaler, and we want to find the GitHub.
A
Where
is
the
github
okay?
So
this
is
like
the
spec
and
how
it
works.
But
if
you
look
in
here,
there's
an
auto
scaling
algorithm
and
then
there's
a
way
where
you
can
plug
into
it
using
the
HPA
objects.
So
let's
just
see
next
steps,
you
do
overview.
Okay,
I'll
just
read
this
out
loud,
really,
quick
over
you.
The
resource
usage
of
a
serving
application
usually
varies
over
time.
Sometimes
the
demand
for
application
Rises
and
sometimes
it
drops
and
kubernetes
version
1.0
a
user
can
manually
set
the
number
of
serving
pods.
So I'm gonna go through this hypothetical situation, and this actually happened to me at my first job in my career. We were an online e-commerce store, and I was, like, writing an API for this store. (Waleed in chat: Knative still spins on the Kubernetes HPA, which is reactive; KEDA is an event-driven autoscaler. Right, and KEDA also uses the HPA, which is what we're learning about right now.) So anyway, what happened at my first job: we were a drop-shipping company.
A
We
were
an
e-commerce
company
and
every
year
on
cyber
monday
our
site
would
go
down
because
so
much
traffic
would
come
in,
and
this
was
back
in
the
days
of
like
load.
Balancers
were
just
kind
of
like
the
new
hotness,
and
so
we
were
exploring
with
different
types
of
loads.
Like
I
think
the
yell
bees
in
AWS
just
come
out.
(Waleed said thank you.) So anyway, what happened was: all the traffic on Cyber Monday would hit our website and effectively would DoS our site. Well, it would be a distributed DoS, a DDoS, because it was coming from all over the internet. And there were so many users that our servers couldn't keep up, and our site would go down, and our server would quit responding and in some cases break. So what the Horizontal Pod Autoscaler would do in that scenario is, it would say: oh my gosh, CPU is going crazy.
We obviously need more resources here. So we would spin up more pods and distribute the traffic evenly across those pods, in the hopes that our CPU would go down in exchange for our number of pods going up. So you're sort of doing this exchange of number of instances of your app for happier CPU, right? And that's how the Horizontal Pod Autoscaler works. So in this case, what we're doing is we're taking these arbitrary events and we're doing the same thing: an arbitrary event happens and goes up, and hopefully our autoscaler also goes up.
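The core of the Horizontal Pod Autoscaler's documented behavior is a single ratio: desired replicas grow with how far the observed metric sits above the target. Here's a standalone sketch of that formula (the real controller adds tolerances, stabilization windows, and min/max bounds on top of this):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas implements the HPA's core formula:
//   desired = ceil(current * observedMetric / targetMetric)
// Pods averaging twice their CPU target roughly doubles the replica count.
func desiredReplicas(current int, observed, target float64) int {
	return int(math.Ceil(float64(current) * observed / target))
}

func main() {
	// Cyber Monday: 4 pods running at 180% of the CPU target.
	fmt.Println(desiredReplicas(4, 180, 100)) // 8

	// Traffic drops to 40% of the target: scale back down.
	fmt.Println(desiredReplicas(4, 40, 100)) // 2
}
```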
So this is a direct relationship instead of an indirect relationship, and we're going to try to demonstrate that today with RabbitMQ and Go. And we have some work that, I'm pretty sure, Jeff Hollan over at Microsoft, who helped me out earlier today (thank you so much, Jeff), has been working on, and we're going to look at the code, see what's going on here, run some tests, talk about KEDA, and then talk about maybe some other examples of how you would want to use this moving forward.
So let's, let's dig into the code. Actually, how do we want to do this? I think we dig into the KEDA code and get an understanding of what's actually going on here, and then I think we go and we actually run the end-to-end and watch it live. I think that's what we want to do. So hold on one second, somebody just messaged me. Oh, Ben (and, I see, others) messaged me their addresses in Kubernetes Slack. I was like, I hope everything is okay.
Okay, so let's do that. Let's look at the code, and in fact, why don't we just open this up in my IDE. Doo-doo-doo.
And let's see here. So this is gonna... wow, okay, cool. So this is gonna check out KEDA into our GOPATH for us. So then we should be able to come here and command-click... that's not... oh boy... awesome. So yes, let's come here. Let's do File, Open, let's go to nova, go, src, github.com, kedacore, keda; open this up, see what we get. A new window, "GOROOT is detected," and let's go here. Okay, can everybody see this? Is the size good? Is everybody happy?
So I'm gonna guess... I mean, yeah, right here in the cmd directory. So, for folks at home, we're looking at Go. If you don't write Go every day, you probably don't understand what's going on here, so I'll try to kind of boil it down. There's some common parlance in the Go programming community: the cmd directory is commonly where the entry point to your Go program is going to be, and that's where you will define all your commands and subcommands.
The pkg directory is, like, the guts of your program; these are all of your libraries that this main file is probably going to interact with. And then we have, like, tests and images and some examples and stuff, and those are pretty self-explanatory. And then, and we can thank Kubernetes for this, there's this lovely thing called hack, which is where all of your, like, hacky shell scripts and all that noise go. And then I'm assuming there's going to be a Makefile somewhere in here. Yeah, here's our Makefile.
We can do make test, we can do make e2e-test, whatever. Okay, well, and then here we can use GOOS and GOARCH to define amd64 and Linux. So you could actually come in and do amd64 FreeBSD, and this would compile for the FreeBSD kernel. So the Go programming language is pretty slick, because you can just define whatever you want there. And again, that's GOOS and GOARCH: the environment variables GOOS and GOARCH, Go OS and Go architecture, obviously. Okay.
So let's go to our main.go, and actually, let's just see if this thing will even build. So: change directory into kedacore, change directory into keda, and let's do a make build and see what happens. So: CGO_ENABLED=0, GOOS=linux, GOARCH=amd64, go build. We do an LD flag, so this is just populating, in our main package, the variable GitCommit with our commit hash here. So this is probably how we're baking our commit into the binary.
Okay, cool, so yeah, that actually worked, it looks like. So let's see where it went. Do we have, like, an output file? Maybe let's look at our Makefile and see what this thing does. So, Makefile... wow, that was really cool, that just worked out of the box; I'm very impressed there. Oh, so here it goes: this -o means the output is dist/keda. Seems fine so far. Okay, thanks for letting me know, Ben. So I bet,
A
If
we
go
into
this
here,
change
directory
into
dist
yeah,
we
can
execute.
This
cannot
execute
binary
file
because
it's
compiled
for
Linux.
So
if
we
wanted
to
come
here,
luciano
says
sorry
I
can't
hear
you
just
kidding
thanks
Lee
Geon!
Oh
you
know.
If
we
wanted
to
move
this
to
a
Linux
machine
we
get
executed
or
another
workaround
here
would
be
to
go
up
here.
We
still
on
AMD
64,
and
then
we
just
changes
this
to
Darwin,
because
I'm
running
on
my
macbook
and
I
bet.
if we go up and we do another make build, we'll be able to execute that one. So, everybody's saying my computer's good, so we're gonna let this compile, I'm not gonna worry about lag, and let's go and start looking at the source code while this is building. Okay, so main.go, what's going on in here? Yeah, so here's that GitCommit var that I just explained, with our LD flags. So this will actually get populated whenever we build our program, and effectively get hard-coded every time you run the program.
A
So that's how you get the git commit hash. And I bet — yeah, right here — somewhere in the program we're just gonna echo this out: the KEDA version and the git commit, down here. So that's how that works. So let's see — main function. We do print version, which is this little function down here, and that just logs some stuff — pretty easy. We init logs — I'm assuming this just sort of instantiates our logger. We defer a logs flush — this all looks fine. Then we call kubernetes get clients. What is this now?
A
We grab some context; we initiate a new scale handler there — so I'm assuming the scale handler is the thing that will actually handle interacting with the HPA and getting all of that nice and happy for us. And then, in a concurrent go call, we do controller, new controller, and then we do Run with our context. Okay, so this line is like the guts of the program here, and we run it concurrently, which is interesting. And then it says — okay, so it's running — adapter, new adapter, Run; it does...
A
...this other Run thing, which we can go in and dig into and see what this Run function does. Oh — and this is just an if statement: "unable to run the custom metrics adapter". Okay, so this is what it's gonna build: the custom metrics adapter that's gonna talk to the HPA. And then, instead of actually waiting on this go line here — usually you'll see, like, a go, and then we'll have, like, a channel that you'll hang on, and when the goroutine is done...
A
...it'll send a true or a false over the channel, and that'll unblock our code. Here it looks like we just do — let's sleep, effectively, for 5 seconds, and then we terminate our program blindly. So I would not want to see this in production code in real life, just because, in theory, this could still be executing and we'd shut down our program prematurely.
A
So you could do something like — you could come in here and you could do, like, let's see — you'd make, we'll call it ch... and it'll be — actually, I'd need to go into it, but basically you would make a channel here, pass it into this function here, and then here you would just basically do this. And whenever this function is ending, you would just have it return something across your channel, and then your code moves forward. And there's some examples of that we can...
A
...we can talk about later — I don't want to get too off in the weeds here. So yeah. So basically, the high-level pattern here — let's draw a quick picture so we can see what the program is doing. "chan bool" — yeah, thanks; so Ashish is helping me out here: chan bool. So this is what we'd do, and then we would pass this into Run. So maybe we would change our function to accept a channel here, and then, you know, down here...
A
...we would do effectively that, and then somewhere in our code we would have a line that takes the channel and goes like this — bam. And so, until this line is executed in the previous function, this will hang, and that's how we could do concurrency there. And Go makes that unbelievably easy for us: we don't need a mutex, we don't need anything at all — so thank you, Go. And then, yeah, there's Ashish: "quit <- true" — yeah. So thanks for helping me out here, but that's how we would do it. And yeah.
A
So let's draw a picture — something high-level to see, for everyone. Let's go to our camera here... and so — here's our Pokémon friend, doo-doo — and if we were looking at our main function: so kind of the first thing we do here is we get two clients. We're going to call one K for KEDA, and we'll call this one C for client-go.
A
So we get two clients. Then we go down further in the code path and we do a new handler — so I'm gonna kind of draw a circle for functions, and these are variables. So we have a handler, and then we have an adapter, and what the handler does — it's called a scale handler... ("refreshing the concurrency module I did on Tour of Go" — nice) ...the scale handler, and then here we have the adapter. And so what the adapter does is...
A
...this is going to basically run the custom metrics adapter, and this is going to handle generating our CRD. So if I had to guess, I would say the scale handler is going to be what is going to create — and Jeff, I want you to answer this for me, if you're still here — I think the scale handler is what is going to create...
A
Wait for it... wait for it... that's the scaled object.
A
So again, I think this is going to come from this part of our code here, and then the adapter is just what plugs this thing that we've already created into the HPA — so the HPA will go and do all of its magic and scale our pods for us. Now, what you need to understand here is there's two layers of stuff going on.
A
So let's say this bottom half of our page here is a Kubernetes cluster, right? So let's say we have a couple of nodes: here's node number one, here's node number two, and here's node number three. And these are — let's just say they're all t2.larges running in EC2, right? You know, some fairly large machine: a couple of cores, maybe a few gigs of RAM — no big deal, nothing to freak out about. And let's say that right now we had three instances of our application.
A
What Kubernetes would do is it would run one instance here in a container, it would run the second one here in a container, and it would run the third one here in a container. And then we would load balance our traffic across all three of these using some load balancing algorithm, like round-robin, and it would just go around in a circle and say: oh, next request goes here, next request goes here, next request goes here.
A
Next request goes here — or, round-robin might... there's other algorithms that are smarter than that, and we can do a whole episode on load balancing later. Anyway, the point I'm trying to make right now is: if we created a fourth instance of our application — which is what the HPA, the horizontal pod autoscaler, would do: it would change the amount of replicas we want, and it would scale that up and down — if we created the fourth one, Kubernetes picks a server. So let's say Kubernetes picks this server, and there's a whole algorithm in play...
A
...that goes into how this decision is made — and we honestly should probably just do a TGIK on that algorithm alone, because that's some fascinating software right there. So that was our fourth one; and then let's say our fifth one goes here. So who can guess where the sixth one is going to go? It's gonna go here. So the point I'm trying to make is: Kubernetes does the best it can to distribute your load evenly. Now, the problem comes in if you said: I want to go set...
A
...my replicas to, like, one million. Kubernetes would continue to deploy containers on each one of these nodes, but at some point — this is a finite amount of system resources — at some point we're gonna cap out all of these servers here. So, in a weird way, Cluster API is to servers as the horizontal pod autoscaler is to pods. And so what Cluster API would do is it would say: okay, we've set replicas equal to a million...
A
...we obviously need some more servers here — and then Kubernetes would go and start distributing across all of these servers until we were able to meet this replica count here. So pods are the layer of software on finite resources, and nodes are the layer of hardware that runs those resources. So we've got to make sure we have a clear understanding of that, because we're gonna go into the pod autoscaler bit a lot in a second. So anyway, this is Kubernetes by default.
A
Kubernetes gives you this layer of handling, and we're working on getting this one solved in Cluster API lifecycle today. Okay, so that was my little rant there. Let's go back — not to my face, to my screen. And it does that by interacting with the CRD here that we call a scaled object. So let's go back to our code, and let's validate that our assumptions are correct. So: new scaled object. We said this should actually create our CRD — and all this does is it returns a handler?
A
You can see that we're actually just doing this with context, in a concurrent go function here. So — Aarthi says hello, one of the devs on KEDA. Oh hi, Aarthi! Please jump in and keep me honest, to make sure I'm explaining all this correctly — at any point, if I mess up or use the wrong word or something. We want to make sure we're giving accurate information here, so feel free to jump in and correct me; everybody at home is gonna appreciate it, including myself. Okay — so this creates our scaled object CR, and then we have this adapter.
A
Let's go into this Run function here. And here we have a stop channel, which is a similar concept to the concurrent pattern we looked at earlier — but this will actually just stop the function. And then we have a generic API server, PrepareRun, and Run — and we'll see what this Run does: "Run spawns the secure HTTP server. It only returns if the stop channel is closed, or the secure port cannot be listened on initially."
A
Aarthi says hello — and again, thank you for joining us; it's very much appreciated. Okay, so it doesn't look like anybody has any questions or commentary — if you do, feel free to jump in — and I'm gonna go ahead and get started. So, what we have here — let's actually go and do this: cloud console... google.com... there's a console... it's console.cloud.google.com, isn't it? Yeah. Okay, I always get that wrong. So here in Google, I just spun up the world's simplest Kubernetes cluster.
A
It's a three-node cluster, us-central1-a, and it's called tgik-076. We come down here, back in our terminal — oh, let's see if we can execute this now. Perfect. Okay — so we were able to compile it, okay. So if we come down here — we all know my alias k is equal to kubectl — I can do k get nodes, and we can validate that.
A
Okay, so with that being said — one of the first commands I like to run when I walk up to a Kubernetes cluster: I usually run my alias kdump, which is basically this — kdump is equal to kubectl get all --all-namespaces. And that doesn't get everything, but it gets most things, enough to where you can start to understand what's going on in your cluster. Oh — so Aarthi does have a comment. Okay, you got it right.
A
"The main components are controllers — this listens on CRD create and update — and the handler — it handles activation and deactivation, and the HPA — and scalers, in the adapter." Okay, so Aarthi is basically confirming this diagram that we looked at a little bit earlier, with the scale handler and the adapter as well. And she says "creates, updates and deletes" — and when she says create, update and delete, she means we've basically got CRUD on our CRD, that is, the scaled object. I know my handwriting is messy.
A
That's how you know I'm a good engineer. Okay, so there's that. Let's go back to my screen — and so now we can just run kdump. But even more appropriate than kdump, if you really are just walking up to a Kubernetes cluster and you just want to get an idea of what in the heck is going on in this cluster: k get namespaces.
A
What namespaces do you have? And here you can see we have keda, which is terminating. This does not look good — probably the finalizer issue that we ran into last week.
A
Ashish says: "I think you were pronouncing her name incorrectly." Okay — can somebody help me pronounce it correctly? "I think you were pronouncing her name incorrectly" — okay, so yeah, if somebody wants to kind of do the phonetic spelling, I'll try to correct that.
A
Thank you. And we might run into some problems with keda here — in fact: k delete namespace... yeah, okay, yeah. So let's do this: we're gonna create a fresh cluster. I don't know what was going on with this cluster earlier, and I don't really want to debug it, but we can talk about what's going on while this thing spins up — and it should spin up pretty quick. Okay, so yeah, we'll just do that.
A
Okay, so back in our terminal here, we're gonna start to run our command — and if we get into trouble, well, by the time we get into trouble we should have a fresh, brand-new cluster with nothing running on it for us to play with. Okay. So here we have the chart, and we scroll down, and we have RabbitMQ and Go. So Ashish spells it out — "aar", then "thee"... okay. I want to get this right: is it "AR-thee" or is it "ar-THEE"?
A
Okay, so here we have rabbitmq-consumer and -sender. So basically, what this example is going to do — I'll just read it to you: "A simple docker container that will receive messages from a RabbitMQ queue and scale via KEDA. The receiver will receive a message at a time per instance and sleep for one second to simulate performing work. When adding a massive amount of queue messages, KEDA will drive the container to scale out according to the event source."
A
Remember, we talked about events earlier. ("AR-thee" — okay, got it. Aarthi.) So, all that this is really saying — I feel like I should draw another picture, but I'm gonna try not to — all this is saying is: we have a rabbit queue, and we can put some messages in the queue. Let's say we put 10 messages in the queue; in order for us to do something with that — what we're calling work, those 10 messages — we have to create 10 containers.
A
Each one will pull a message down from the message broker — Rabbit — and will sleep for a second and then terminate. And that is simulating, like, a program that would be processing a request or something. And so, if we threw a thousand messages in the queue, you can see how exciting — we can start to design these systems for some load testing. And again, we have various events that we can plug into KEDA. I'm curious there — and Aarthi, maybe you can answer this question for us — but we have a few events...
A
...in the same way we've sort of handled various checks on our pods in Kubernetes, like the readiness probes and the healthz endpoint and things like that. But anyway, let's go back here: RabbitMQ. So we're gonna start by going through this setup here, and it says prerequisites: we need a Kubernetes cluster — which we have one; it may or may not work for us — and then we need to install KEDA on it. "Yes, that's right. Each of these needs a scaler to be written." Okay, so Aarthi confirms it.
A
So let's see if we can't find a scaler coded anywhere here... scalers. Perfect.
A
Okay, so here we have — here's our RabbitMQ scaler. Okay, cool. "It's a plug-in system, but it's not dynamic currently" — good feedback, thank you; thank you for taking that feedback, that's so amazing. So here, if we look in the KEDA repository — remember, earlier I mentioned pkg, or packages, is like the guts, where all the libraries are — here we have a package called scalers, and in here you can see we have RabbitMQ, Kafka, the Azure Service Bus, and the Azure queue. And of course, if we go back to the repo, we go back...
A
RabbitMQ, Azure queues, Kafka — it's all here. So we're going to want to look at the RabbitMQ scaler, and let's see if we can't kind of understand what's going on here. So I'm assuming there's got to be an interface for this somewhere — I bet that's what's in scaler.go. Gosh, okay, so really quick: Aarthi, your code is gorgeous. I love the way you have this thing set up.
A
This is 100% exactly how I would do it — good show here, having your scaler interface exactly where I would want to put it; it just popped up almost like magic when I was like, "I wonder if this is it." So anyway, here we have the Scaler interface: we have GetMetrics, we have GetMetricSpecForScaling, we have IsActive, and Close. And here in the RabbitMQ scaler — how much do you want to bet we're going to be able to find...
A
We have a New function, and then I bet we can scroll down — and here is our IsActive; here's our GetMetricSpecForScaling; GetMetrics; and there was probably one more I'm missing... Close — here it is. So every one of these scalers is going to implement this interface, and it's gonna have these functions defined. And all it's going to say is: GetMetricSpecForScaling is just going to mean a different thing in the case of Rabbit than it would for anything else.
A
So we could literally write a scaler today that checks if a file exists, or checks if Twitter is up — I mean, does virtually anything we want. So Jeff says: "RabbitMQ, Kafka, Service Bus; an AWS SQS scaler is being worked on." Yeah — so, folks at home, if you want to use KEDA and you want to contribute a scaler, it looks like it's set up, and it's gonna be pretty easy for you.
A
All you would have to do is come in and add one of these scaler files, and a scaler test — Go — so you can test your scaler. Looks like the RabbitMQ one doesn't have a test — whoever wrote the Rabbit one, you should write tests; shaking my finger at them. Anyway, so that's how the scalers work, and this allows us to map arbitrary things very conveniently to the rest of our KEDA system — which, again, is going to talk to our horizontal pod autoscaler here.
A
So really, all KEDA is doing is gluing together some sort of arbitrary action or thing with the horizontal pod autoscaler, and it's just defining, like, a rigid structure to make that interaction as seamless as possible. Okay, so those are scalers in KEDA. So let's go here — actually, let's just check and see what's going on here. Okay, so this is already up. So let's connect here — boop — and I'm just going to nuke my kubeconfig... kubeconfig... and we'll describe this really quick.
A
Do k get nodes — I don't know why the internet's going slow today — there's our brand-new cluster that we just spun up a few moments ago. Okay, cool. So let's go back to KEDA. We now have a brand-new cluster with nothing installed on it that I haven't broken yet; the finalizers for the namespace are nice and happy, and the namespace has never even been created, so we should be good there. So the first thing we want to do is install KEDA.
A
So we have this bit done — Kubernetes cluster — and now we want to install KEDA on the cluster, and then of course deploy it with a Helm chart. So, in order to install KEDA — it looks like... I mean, in theory, I have a binary right here; I could just, like, kubectl run this thing if I had it in a container. But what the folks over at KEDA have done:
A
They have used a tool called Helm, which basically is an abstraction on top of YAML: it allows you to template YAML, and then define, like, variables and a little bit of logic for your Kubernetes resources, and we can glue all those together and we call that a release — and a release is an idea that's important to humans. So in this case, our important idea is keda, and we're gonna use a tool called Helm...
A
...to add that. So let's see what we've got going on here. And yeah — I used to work at Deis with all the Helm folks way, way long ago; they're now at Microsoft, and honestly some of my best friends in the world. In fact, one of the folks from that team is how I got in touch with Jeff, and it was just like a trip down memory lane — it was just really great getting to talk to an old friend again. Okay, so anyway — let's see, Jeff has a question here.
A
It says — ah, good question: "We do have a release out right now; we need to shore up the system to sync with GitHub releases." And Learn Tech To Innovate says: "Jeff — curious how KEDA is different from the custom metrics in HPA." Okay — so hold on to that question. We're gonna go back to Helm here. So I've already done this, but we'll do it again for good measure.
A
So the way Helm works is you need a Helm repo. So — Helm was originally designed to sort of be like apt-get for Kubernetes, and in the same way as most package managers, you can add arbitrary repositories. And all a repository is, is just an HTTP server — I think — that is set up in a special way such that Helm knows how to go and look for the various things the server can serve; and if the server is set up correctly, then Helm will treat that as a valid repository.
A
So we're going to add a new reference, and all of that information is stored — somewhat invasively, but — here in the .helm directory on my computer. You can see here we have repository, and in here we have local, cache, repositories.yaml — and if we, of course, cat out repositories.yaml, you can see here we have all of, like, the kubernetes-charts; we have my repository...
A
We have Jenkins X; we have kedacore, that I've already added. And let's go back to our go path — and that's how Helm sort of stores all this metadata information about KEDA. Okay, cool. So we're gonna do our helm repo add command — doo-doo-doo, which is here: helm repo add... "kedacore has been added" (although it was already there). And then we do a helm repo update — this is one of those things about Helm that you just kind of have to learn to get used to doing — and this will just, like, pull updates from each of these repositories and stash them locally in that directory we just looked at. And then here's our install command here.
A
What's going on here — and this is one of those things where Helm makes things a lot more convenient, but also kind of blinds you from what's actually going on. But you run into the same issue with any package manager: like, if you're on your system and you do an apt-get update, you're blindly downloading packages from somebody on the internet who just said, "yeah, these are pretty good..."
A
"...you should run these." But you don't really know what's going on there. So anyway, if we wanted to find out where this was, we can go to this server here — or, what this was, more importantly — oop, and it's an XML server. I think there's, like — you have to send a POST request, or... Helm has some protocol that I don't know off the top of my head. But regardless, there's gonna be a chart hosted on this server somewhere, and Helm's gonna be able to recognize that. And all a chart is, is — if we go back to...
A
Let me close some of this stuff. If we go back to the repo for KEDA — let's close that, and let's go back to our KEDA repo... where's GitHub, let's type it in here... there we go, this is what I want. We go back here — you can see we have this chart/keda, and this is actually a representation of what we're running. So the way that Helm works is: here in the templates directory, these are all Kubernetes resources. So if you've ever seen a Kubernetes deployment before, this deployment.yaml will look familiar.
A
If you've ever seen a Kubernetes service before, this service.yaml will look familiar. And if you go up a directory, this is where you define the template-abstraction-y bits that Helm likes to use, in this file here called values.yaml. And the way that this works is: Helm will take this YAML file and plug it into those YAML files...
A
...using this templating thing called text/template, and you get this lovely, like, jumble of YAML and text templating that's really hard to read. And in order to actually understand what's being deployed, you have to go and look it up: keda.fullname is going to get set to name here in our deployment, and so I go up two directories, I go into values.yaml, I look for the keda object — which is not here, I don't see it... nameOverride, fullnameOverride... maybe it's here.
A
Where is this thing? I bet — maybe you pass it in when you do your install? Let's see if that's the case... keda install... so we're back here again: yeah — namespace keda, okay, and then name is keda. So yeah, you just pass in these various things whenever you do your install. Okay — so we could come through, and we could go back to our GitHub page here.
A
Let's do kedacore, and we can actually see what all of ours is being set up here. So we do a cluster role binding — and I'm gonna kind of go fast here, but I've worked with Helm a lot in the past, when I was working with that team, so I can read this stuff pretty well — and I'm just gonna try to explain what we're about to install on our cluster. So, a cluster role binding — this will probably be the 20th time...
A
...I've said this: it's like a bridge table in a relational database. It maps a user — or a person, or an idea — to some access level that says: you can get on this resource, and you can list on that resource, but you can't get on that resource. So: sure, you can get pods; sure, you can get services; but no, you can't get deployments — or whatever the case is. And all the cluster role binding is, is just the glue that sort of matches those two together.
A
Think of, like, the old operators at the phone switchboard — patching in all the patch cables and connecting people: that's a cluster role binding. Okay, so all that's doing is setting up our RBAC for us in the keda namespace, which is good — we want to see our RBAC; that's happy. We have a custom API service — so this is an APIService, and basically we have version priority, group priority minimum, insecureSkipTLSVerify — so this has got to be some sort of a custom...
A
Oh — it's a custom metrics thing here, so we're probably using that to define some service-level configuration. We have a deployment, which — this is not too terribly long, but you can see here, what I really care about is .Values.image.repository and .Values.image.tag. So if you come up here to templates — I'm sorry, to keda — and go into values.yaml: this is values, and then we have image: kedacore, and tag is latest. So that's actually defining where we're pulling this container from, for this deployment, and I'm...
A
...assuming all that this is — if we go look at our Makefile — that's where we're gonna connect everything together, from our main function that we just looked at all the way to this Helm command here. So that's the glue. So anyway, I've run this, so we're happy — but there, you can go and look through everything and see how everything works, and check it out...
A
...if you want to. And now we're going to do our helm install — doo-doo — and of course, if we go back and we look at KEDA, you can see there's a handful of other things; I've already gone through and kind of scoped these out, and we're getting close on time, so I want to kind of hurry along here. But we do define a service; we have a service account — which is the "person" in the RBAC example — and then we have our scaled object...
A
...CRD — which, remember, the scaled object is the unique schema for this specific project, and we define all of that. And then we do our helm install, and that command is running here, and what we'll see is it'll actually spit out all of the things we did here. Alan Fraser says: "helm --debug --dry-run helps me render the template, for those of us who struggle to read the raw charts." So yeah — so Alan's...
A
...just basically saying: if you check out the chart locally, you can do helm --debug --dry-run, and it'll render the template and echo it out for you, so you don't have to do that mental map like I was just doing. So yeah — this is installing... oh, you know what's going on here? I need to install Helm on my server. So I have this lovely bash alias here that will install Helm, and then we can do helm...
A
So basically, the way Helm works — and this is just a pattern that they picked a long time ago — is it installs this thing called Tiller in the root namespace of your cluster, and it's a gRPC server. And Tiller is what actually manages all of the things and does all the interaction with your cluster; helm is just a thin, lightweight client that interacts with Tiller. And so that's really what's going on here. And — "kubectl apply, without running Tiller" — so yeah...
A
So here — Ashish is just saying you could do a kubectl apply if Tiller isn't running, and you'll still get your rendered template. I think the helm --debug --dry-run makes a little bit more sense for me. Okay, so anyway — so we want to see... well, these are all the Helm commands, first of all, so you can come through and figure out all the goodies that you can do with Helm.
A
There's a ton of stuff here — we already did a TGIK on Helm — but we can do a helm ls, and we can see if we have anything running, and of course we don't, so we just get a new line. "We also have a deploy folder with a deploy YAML you can use, skipping Helm" — okay, cool. So Aarthi says there's a deploy folder here! Okay, I should have done this. So yeah — here's our service controller, and then here's our YAML. Okay, cool.
A
So if you wanted to just come through — and this is all the YAML that we were looking at in the Helm chart: here's our cluster role binding we talked about. I bet if we look in here we can find our — yeah, here's our scaled object CRD; I bet there are deployments in here somewhere... yep, here's our deployment, and you can see here we have kedacore, keda:latest. So yeah — this is a little bit more literal and a little bit easier to read.
A
So if you want to come check out the YAML, come look here — this is the thing Helm is just building for us from all of those other YAML files; it just glues it all together and gets this. Okay, cool. So anyway, we can run our helm install command — no, sorry, let's grab this command back here: helm install. So we'll paste this in, and we'll run it and hope for the best.
A
There it goes — okay. So here you can see all those files we looked at in the templates directory in the chart, and the values.yaml file that we just looked at in the deploy directory, all match here. Here's that cluster role binding again; here's that deployment again; we looked at the service a little bit; here's our API service we looked at; and there should be a CRD in here somewhere... doo-doo-doo... custom metrics... where's our CRD? Let's see.
A
We can see here everything that's running in our Kubernetes cluster — mostly. So here we have the namespace keda, and it's running a pod, and that status is Running. Here we have a namespace called keda-edge, and this is a service, and here you can see we have — dududududu — a keda deployment and a keda replica set. So I bet if we do a k get po --namespace keda... yep, sure enough, there is the keda service.
A
So this is the same program that I just compiled locally. Marcel — earlier — asked me to explain a little bit more the CRD idea. I'm not sure if you're a MySQL person, or a relational database person, or not, but a CRD is effectively a schema for an arbitrary idea. So earlier I used some examples — it stands for custom resource definition, but I used some examples like: it could be, like, animals, and you could have, you know, what color is it, does it have scales, does it have fins, does it have eyeballs?
A
The same way you can with, like, a relational database schema. Okay, cool. So anyway, let's grab logs on this one: so k logs, in keda, the name of our pod, -f. Okay — and earlier, when we ran it locally... let's open up a new tab here, so go back to my go path: go/src/github.com/kedacore/keda... list... change directory to deploy — nope...
A
...it was dist. There you go. So when we run this locally, you can see here we get an info, and we get log level and our KEDA version, and then it effectively breaks, because we don't have some environment variables defined. We don't want to run this thing locally — but what I want to show you is how this is going to match what we see here.
A
Right there — so you can see: it's the same software running. So AB demo says: "thinking about CRDs, it's like a plugin design — a YAML definition for any resource, and then you implement the controller for it." Marcel says: "it's a way to extend the k8s API — that way you can make k8s have custom resources without having to code into the k8s codebase." And then there's — looks like Christian adds in stuff too. So yeah — however you want to think about it.
A
It's — it's literally just an arbitrary idea, and then usually they're used in controller patterns, and you know I've got some talks on that I can point you to — or, if you want to talk to the folks in the chat about it, go for it. But basically all the CRD does is define what you want, and the controller just loops over and over again and tries to make that happen.
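To make the animals example concrete, here's a rough sketch of what a CRD manifest for it could look like. This is purely illustrative — the group name `tgik.example.com`, the `Animal` kind, and the field names are made up for this example, and `apiextensions.k8s.io/v1beta1` was the CRD API version current around the time of this episode:

```yaml
# Hypothetical CRD for the "animals" example — names invented for illustration.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: animals.tgik.example.com
spec:
  group: tgik.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: animals
    singular: animal
    kind: Animal
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            color:    {type: string}   # what color is it?
            scales:   {type: boolean}  # does it have scales?
            fins:     {type: boolean}  # does it have fins?
            eyeballs: {type: integer}  # does it have eyeballs?
```

Once something like that is applied, `kubectl get animals` works like any built-in resource, and a controller can watch Animal objects in its event loop and reconcile them.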
A
Doing, like, an audit in an event loop. Okay, so anyway, I'm off on a tangent. We got KEDA running, we got logs for it. So now let's go here: let's go back up a directory and let's see what's next in our tutorial here. KEDA is installed — good — which means all those scalers that we looked at, all of that logic, is now running in Kubernetes in a loop, over and over again. So, the next step here: we're done installing KEDA, so we've got Kubernetes installed, we installed KEDA on the cluster.
A
Next, we want to clone this repository here, so I'm gonna use that clone command I have — clone, doo-doo-doo — and now we just want to go to this directory here. So basically, clone is just smart enough to hit GitHub and say: oh, it's a Go program, I should put it in the Go path. That's all it really does, and you don't need to — you can just put github, and it's, like, smart, so you can actually do things like clone kubernetes and it'll go find Kubernetes for you. Anyway.
A
If we go to this directory — to go in here — you can see we are now in this repository here on my local computer, and then it said, yeah, of course, just cd into that repository. So now it says we want to install RabbitMQ. So, another bit of — ahem — rant here: if we want to install RabbitMQ, the part of this command that you really need to understand is this part here. So this has stable/rabbitmq. That's — that's the —
A
What — what are we installing? And then we override some — remember that template file? There are some variables in there we can override using this --set flag, so we're just setting user and password to whatever we want it to be. We'll just keep it what it is for now, because we're just doing a demo. But this part — where is this coming from? So go to github.com — actually, let's do helm/charts.
A
Dudududu — l, m, n, o, p, q, r, s — rabbitmq, right there. And then this is just gonna be another Helm chart, just like the one we looked at earlier, with the values.yaml, with templates — except all this is doing is it's just bringing up a regular old RabbitMQ message broker. Now, it's opinionated in the sense that, like, if you want to set up your message broker — like, say you want to have highly available Rabbit —
A
This may or may not do that for you, because this is gonna make a lot of assumptions about how you want to install this. But if you happen to want whatever the folks who work on the charts team prescribe as a good or a smart idea, you can go ahead and you can helm install this right here. So that's what we're gonna do: we're just gonna trust this for the sake of the demo, and we're going to do a helm install of RabbitMQ just to get it running.
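The `--set` flags in that install command are just inline overrides of the chart's values.yaml. An equivalent way to do it — assuming the `rabbitmq.username` / `rabbitmq.password` value names the stable/rabbitmq chart used at the time — is a values file of your own:

```yaml
# my-values.yaml — hypothetical override file, equivalent to the --set flags.
# Passed with: helm install -f my-values.yaml stable/rabbitmq
rabbitmq:
  username: user
  password: PASSWORD
```

Either way, Helm merges these on top of the chart's defaults before rendering the templates.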
A
So, let's grab our command again and we'll copy this, and we will install it — so, run that. Okay, you can see here, just like last time, we got a list of everything it installed: it installed a service, it installed a stateful set; inside of the deploy this time we got a pod; we even have a secret, which is — that's our username and password. It looks like we have a config map to configure Rabbit, and we even did some RBAC-y things here at the end.
A
So we come in here and let's grab these logs now. Okay: logs, -f — and we can see that this is — this is all Erlang, right? So RabbitMQ is written in Erlang. So this is actually Erlang now running in Kubernetes, and that's what's going on. And so let's open up a third tab up here at the top, let's zoom in a little bit, and let's do k get p—
A
Oh, and you can see here: RabbitMQ, we have one of one ready, it's in state Running, and if we want to see the logs we can see them over here. We'll throw in some — some blank space here and we'll come down here, and we can still do, you know, k get deploy and everything else and see what's all running in our default namespace. Okay, RabbitMQ is installed.
A
Erlang problems — okay. So next we want to do a kubectl apply of what we call a consumer. So if you've ever used RabbitMQ before, you'll know that there's this concept of producers and consumers, so we're gonna draw a picture. And I know I'm going long on this episode, but whatever — it's my last episode at the Heptio office and this is fun, so deal with it. Okay, so here we have a message broker, and we're gonna call this thing Rabbit, and Rabbit is fantastic. It's a fantastic message queue.
A
So, what would happen in this example — and just so you understand the terminology as we're going through it — here is a producer. It would create a message, or create work, in our message broker, in this specific queue. And let's say this queue is called /hello, and let's say we have another queue that we're gonna call /goodbye.
A
So this producer might stick messages in hello and this one might stick messages in goodbye, and the way that Rabbit works is: a message comes in, and the first thing Rabbit does is it persists it to disk — and this is one of the huge things that's important to Rabbit. Which means, if our program breaks — so at some point, if our program broke, this person no longer has the message, because they stuck it here; this person has not consumed the message yet; and our program just died, and all that memory went to waste.
A
So we lost that message forever. So one of the important things that Rabbit does is it persists it to disk until this one comes along and reads the message, and also goes back and acks and says: yes, I have the message. And then Rabbit unwrites it from disk, and that's how the Rabbit message queue works — so that even if your program breaks, you're always able to recover your messages back from disk, because Rabbit will not acknowledge this one until it's been written here. So we just get some really powerful guarantees.
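That persist-until-acked behavior can be sketched as a tiny toy model — this is just an illustration of the guarantee being described, not how RabbitMQ is actually implemented (real RabbitMQ persists to disk; this toy keeps everything in a map):

```go
package main

import "fmt"

// Broker is a toy model of the ack guarantee: a published message is
// "persisted" and stays persisted until the consumer explicitly acks it.
type Broker struct {
	persisted map[int]string // stand-in for messages written to disk
	next      int
}

func NewBroker() *Broker { return &Broker{persisted: map[int]string{}} }

// Publish persists the message before the producer considers it accepted.
func (b *Broker) Publish(body string) int {
	id := b.next
	b.next++
	b.persisted[id] = body
	return id
}

// Consume hands the message to a consumer but does NOT delete it yet —
// if the consumer crashes here, the message can still be re-read.
func (b *Broker) Consume(id int) (string, bool) {
	body, ok := b.persisted[id]
	return body, ok
}

// Ack is the consumer saying "yes, I have the message" — only now is it removed.
func (b *Broker) Ack(id int) { delete(b.persisted, id) }

func main() {
	b := NewBroker()
	id := b.Publish("hello")
	body, _ := b.Consume(id)
	fmt.Println(body) // "hello"
	// Pretend the consumer crashed before acking: the message survives.
	_, stillThere := b.Consume(id)
	fmt.Println(stillThere) // true
	b.Ack(id)
	_, gone := b.Consume(id)
	fmt.Println(gone) // false — removed only after the ack
}
```

The point of the sketch is the ordering: remove only after ack, so a consumer crash between Consume and Ack never loses work.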
A
So as long as our producer, Rabbit, and our consumer respect Rabbit, we're allowed to put as much work into Rabbit as we want, and if our program — if Rabbit — breaks or goes down, we know we don't lose any work, because it's been persisted to disk down here. And all the producer does is create messages, or create work, and all the consumer does is read messages and read work — and that's how pretty much every message broker, every event-driven system, works, and that's RabbitMQ and why it's so exciting. Kafka works in very different ways.
A
So anyway, that's Rabbit, and that's one of the reasons why Rabbit's my favorite. Okay, that's my Rabbit rant. So let's go back to my screen — or my face. Okay, so it says we want to deploy a consumer. So we go in here, let's change directory to go/src/github.com — Marcel says: kind of how I feel to be in a Java world now — go/src/github.com/kedacore.
A
What is it — sample-go-rabbitmq — and then in here we have — what was it — deploy, yeah. So let's cat out this consumer and see what's going on here: deploy-consumer, yaml, okay. So this is what the instructions are telling us to deploy. So we have a deployment here, it's called the rabbitmq-consumer, and again, the consumer is that piece that's listening — it'll be reading from Rabbit — and this is probably already configured, yeah, to use our default RabbitMQ username and password.
A
Thank you, Jeff, for clarifying that. Okay, cool. So all this is, is it's running this program here that's called jeffhollan/rabbitmq-client, and I bet if we go here, we can see that — yes, there's this command here, and we have send and receive, or produce and consume: producers and consumers. So this would be our producer and this would be our consumer. So I bet all this is doing is it's just a simple program that just talks to RabbitMQ — and of course, that's precisely —
A
— what's going on here. Agreed, and good feedback, thanks. Thanks for letting me pick on you, Jeff, I appreciate it, and thanks again for helping me out here today. Okay, so Jeff wrote this really simple program, and all it does is it demonstrates some work for us, to use KEDA. So thank you, Jeff, for doing this — it's making our demo much better. Okay, so we're gonna run our consumer, and you can see here we've actually got a ScaledObject for it, and we have our deployments. So let's do our — look — kubectl apply.
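For reference, a KEDA ScaledObject tying a deployment to a RabbitMQ queue looks roughly like this. The apiVersion and trigger metadata field names here are from memory of the v1-era KEDA API and the sample repo, so treat the exact spellings as approximate and check the kedacore docs:

```yaml
# Approximate shape of the sample's ScaledObject (v1-era KEDA API).
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer
spec:
  scaleTargetRef:
    deploymentName: rabbitmq-consumer  # the deployment KEDA scales up and down
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello               # the queue whose depth drives scaling
        host: RabbitMqHost             # resolved from the pod's environment
        queueLength: "5"               # target messages per replica
```

This is the glue: the scaler watches the queue, and KEDA feeds that metric to the horizontal pod autoscaler to size the consumer deployment.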
A
So this is what's actually running in Kubernetes here: rabbitmq-consumer. We have a deployment name; here's our username and password, hard-coded into our deployment here. I would like to see this in a secret, but again, it's — it's a sample project. We're kind of more interested that it works and that we can demonstrate how KEDA works, and less about actually keeping this thing secure, since we're just doing a demo — we're not running this in production anywhere. Okay, so let's see what's going on here. Okay, get pods — you can see we have that running.
A
So we have our RabbitMQ running, but we don't see anything running for our consumer. Now, think about this, right: so the way that this system is designed with KEDA is, a message comes in to this piece of software that I have highlighted here — the RabbitMQ message broker itself — and then KEDA will autoscale pods as necessary to do what we looked at here. To do this, KEDA will create a pod for this one, and create a pod for this one, it'll create a pod for this —
A
— one, will create all these pods as you go down and you get more and more consumers — I know I'm getting sloppy, right, so anyway, don't pay attention there — but as we're creating more consumers to ack the work that we're putting into Rabbit, we're gonna need to create more pods along the way. So, we don't have any work in Rabbit — so do we see any pods? We don't see any pods yet. So let's generate some work and we can watch pods autoscale. "Can this be scaled up from zero instead of one?" — Ashish —
A
Can you be a little more descriptive of what you're referring to? Are you talking about the RabbitMQ instance? Are you talking about the consumer? Ashish says yes and yes — okay, so I — I'm kind of confused about what just happened, but it looks like everybody figured it out. Okay, so anyway, now we're gonna do our deploy of our publisher job here. So let's go look at our publisher job and see what's — what's going on here: kubectl apply -f deploy-publisher —
A
Doo-doo — and let's cat this thing out: ls, cat deploy-publisher-job, clear the screen again, and there we go. So it's a batch job. So a batch job just kind of, like, runs like a cron job — it just runs once — and it's gonna pull this other image, which I'm assuming is built from this other Go command here, and it's called jeffhollan/rabbitmq-client. And again, this thing — it's gonna do a send to the RabbitMQ server.
A
So all this is going to do is just send a bunch of messages, and I bet if we go and we look in Jeff's code here, we can see cmd, send, send.go — we can see that he's probably doing some exciting things here. So, yeah: it takes in message count as one of the arguments, and I bet here, in our build file or Dockerfile, we can see which arguments we're defining — or maybe it's in the deployment.
A
Let's see if it's in the batch job: image pull policy Always, image — I don't see any arguments defined here, so somehow we're passing it, and I think maybe this is it here — yeah, 300! So these are the arguments: we're telling it to send, we're giving it the username and password, and we're telling it to create 300 messages. So we could actually edit this 300 and change it to, like, a million, and it would be pretty exciting. It would probably break our cluster.
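The shape of that batch job — with the message count passed as a plain container arg — is roughly this. The image name and arg layout are reconstructed from what's on screen, and the connection string is left as a placeholder, so treat the details as illustrative:

```yaml
# Approximate shape of the publisher Job being discussed.
apiVersion: batch/v1
kind: Job
metadata:
  name: rabbitmq-publish
spec:
  template:
    spec:
      containers:
        - name: rabbitmq-client
          image: jeffhollan/rabbitmq-client:latest
          imagePullPolicy: Always
          # send <amqp connection string with the username/password> <message count>
          args: ["send", "<amqp-connection-string>", "300"]
      restartPolicy: Never
```

Editing that last arg is the knob being joked about — bump 300 to a million and the autoscaler would have a lot more to chew on.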
A
Ashish says: I think you answered it — there are no consumers running, or zero consumers are running; expecting more will be added as messages come in. Yes, Ashish, you are 100% correct. So let's apply this and let's watch our 300 messages go into Rabbit: kubectl apply -f deploy-publisher-job — boom. Okay, this is where, like, things are gonna get exciting.
A
Deploy this; we're gonna do k get po, watch, clear our screen. All right, this is cool: so KEDA is automatically scaling containers to keep up with the work that we're creating in Rabbit. And if we come here, you can see: here's RabbitMQ having a party, here is KEDA itself having a party, and you can actually watch these pods kind of come and go. And for good measure, let's open up a fourth tab, zoom in, and let's do a — the docs say there's a command here that we can run — doo-doo-doo-doo — get hpa.
A
So this is exciting: this is gonna get our horizontal pod autoscaler, and here you can see how many targets we have, our max pods, and how many replicas. So this is our — we can do a watch here, and now, like, if I was, like, a sysadmin, I would have all four of these pulled up, like, on my screen, and that would be, like, my command station, right? That's where I would be.
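The numbers in that get hpa output — target, max pods, replicas — are tied together by the HPA's core scaling rule: roughly desired = ceil(current × currentMetric / targetMetric), clamped to the min/max replica bounds. A small Go sketch of that arithmetic (simplified — the real controller also applies tolerances and stabilization windows, which this leaves out):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas is the core of the HPA scaling rule:
//   desired = ceil(currentReplicas * currentMetric / targetMetric)
// clamped to the [min, max] replica bounds.
func desiredReplicas(current int, currentMetric, targetMetric float64, min, max int) int {
	d := int(math.Ceil(float64(current) * currentMetric / targetMetric))
	if d < min {
		d = min
	}
	if d > max {
		d = max
	}
	return d
}

func main() {
	// 1 consumer pod, 300 queued messages, target of 100 per pod -> 3 pods.
	fmt.Println(desiredReplicas(1, 300, 100, 1, 30)) // 3
	// Queue drains to 40 messages -> scale back down toward the minimum.
	fmt.Println(desiredReplicas(3, 40, 100, 1, 30)) // 2
	// Empty queue -> sit at the minimum (the HPA's minimum is 1 today).
	fmt.Println(desiredReplicas(2, 0, 100, 1, 30)) // 1
}
```

This is why the replica count climbs while the publisher floods the queue and then falls back as the consumers ack the messages down.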
A
Why? Because you'd be watching all these different components — and you can see here we're creating containers, still having a party in Rabbit, still having a party in KEDA. And if we just come out of here, you can see we actually pulled the number of targets down, because the number of messages in Rabbit is now shrinking — they're now getting smaller as these consumers ack them. So that's KEDA. We created an event —
A
We — we generated some synthetic work, and we watched the Kubernetes horizontal pod autoscaler scale up and down as we put messages into our system and as it needed to complete some work. So, big takeaways here — I'm gonna zoom out and do my thing here at the end. Okay, so, big takeaways here: if you want to write a scaler, you can, and I'm assuming PRs are accepted — it's open source, right?
A
If you want to write a scaler for whatever arbitrary event you want — it could be an event broker that you wrote in bash for your company, I don't really think it matters; if it's software, we can make it do anything we want. "Min pods is one" — Ashish is just thinking about something — and then you could, you could build that into the — the codebase here. Right now you would have to hard-code it in, but I —
A
It sounds like they're open to the idea of having some dynamically loaded configuration system — Helm has one of these, and there's a bunch of examples in the Kubernetes source. If you want to see an example of, like, how to do some dynamic, plugin-y stuff, I can show you some source code that I think you'll enjoy, based on the code that I looked at — and it looks like you think the same way as a lot of the other folks I work with. Jeff says: HPA has a minimum of one today.
A
You could use KEDA for anything like that, but right now it's gonna require a source code change — and that's fine, that's how we started in Kubernetes, and now we have CRDs. The first time we wanted to create a new object in Kubernetes, we went through and we added it to the standard library. Now we don't have to do that, because we have CRDs and those are dynamically loaded — but we have to start somewhere.
A
So that's where we are today with KEDA, and you can see here, if I go back, you can see these pods have now — have since terminated. So our work has expired, those pods are no longer needed, and the HPA is scaling back down to zero. In fact, I bet if I do k get po here — k get po — we're back down to zero; we have one pod. Jeff says: spoiler alert — that scales after Game of Thrones ends and tweets spoilers — a spoiler scaler. That's a great idea!
A
Jeff — I said, yeah — Jeff says we should have a Game of Thrones spoiler scaler. That's really hilarious! Okay, so anyway: build your scaler — PRs are accepted — hook it up to the horizontal pod autoscaler in Kubernetes. You've seen one of the — service, wait, ScaledObjects — did I say that right? Let's see what we got here, so I did k get — there it is — okay, good. Why can't I do k get crds? Okay, good: CRDs. I —
A
— don't know. Anyway: create that, glue that together with the horizontal pod autoscaler, and it'll scale our pods for us. Now remember, this doesn't scale infrastructure — this just scales your pods. But if you're doing any sort of burst processing — like maybe you have this, like, one batch job — or, like, I know Holden, she was on the call earlier —
A
— does a lot of work with Spark, and she's, like, famous for these: I'm gonna, like, run this, like, one Spark job, and it's this ephemeral thing that runs for a finite amount of time and it just hammers our system and does a bunch of, like, fancy Scala machine-learning stuff that I don't understand. And, you know, something like KEDA would be able to distribute that work a lot better, and we could automatically scale up and down as those jobs came into our workqueues. And so, if you're doing big data processing, if you're building a data pipeline, something like KEDA might be appealing to you. And if this is something that you've been wanting for a while, you can now come in and join this community and start working with KEDA and our friends at Red Hat and Microsoft.
A
This is our last time in the TGIK studio. I want to, like, take one of these sound things home and, like, frame it on the wall or whatever, but yeah — this is kind of surreal. Last time in the TGIK studio, last time I'll be here at the Seattle office. I love this studio so much; this is where it all started, and we're gonna get a new one soon, and it's gonna be really exciting. But thanks, everyone, for joining — it's been great. This was a fun episode.
A
I had a lot of fun. I'm sorry I went and did an hour and forty minutes — I usually try to do an hour and 20 — but we had a lot to talk about, and I kind of got derailed off on a tangent quite a bit. But yeah, thanks for joining. It was great to see everyone; have a wonderful weekend. We all know what I'm gonna be doing this weekend.
A
So wish me luck, and I will see everyone next week — I think from home: maybe from Joe's basement, maybe from my house, I'm sure. Ashish says: one more round of trivia. Okay, you want to do one more round of trivia? Let's do it. So: one, two, three, four, five, six, seven, eight, nine, ten — we'll do one more, final round of trivia, and then we're going to pretty much be out of stickers, so I might be ordering some more stickers if I can find my own template. Okay, so, Ashish: what do we want to do trivia on?
A
Okay, so I saw this one earlier on Twitter. So: what Kubernetes release — what Kubernetes release defines the official pronunciation of kubectl? There's a release number that's out there, and it's in the changelog; we're gonna look it up together. Which Kubernetes release defines the official pronunciation of kubectl? And here is a hint.
A
Well — whoisaa, there we go, they got it. So whoisaa — is that Ashish? I'm assuming that's Ashish. Let's look in chat. "Madness — there is an official pronunciation?" Yeah, there's an official pronunciation, but that's, like, not official — it's just in a changelog, that doesn't mean anything. Nobody reads the changelog. I read the changelog, but I don't think anybody else does. Okay, so let's go back to our chat and let's see who won. So: whoisaa — whoisaa, that's who won.
A
Okay, cool — you know what to do: submit your information and I'll get this stuff in the mail as soon as possible. One, two, three, four, five, six, seven, eight, nine, ten. Okay — come find me at KubeCon; we're gonna have a lot more stickers and fun. Joe and I are gonna do TGIK; we're gonna miss the Seattle studio, but we're getting a new, awesome studio. Joe and I, and Duffie, and everyone else will be on TGIK from home for a while.
A
It's been a great past two years here at the Seattle Heptio office. We wouldn't be where we are today if it wasn't for you and everybody out there. So thank you, on behalf of Heptio — oh, on behalf of VMware. We couldn't do that without you. Click that subscribe button right here on YouTube, support us, and we will see you all next week. Have a great weekend, everyone. Bye!