Description
KEDA is a single-purpose, lightweight component that can be added to any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and extends functionality without overwriting or duplicating it. With KEDA you can explicitly map the apps you want to scale in an event-driven way, while other apps continue to function as before. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks. In this short talk you will learn the following: (i) Why KEDA? (ii) What is KEDA? (iii) Virtual Kubelets (iv) Helm Charts (v) Demo.
A: Let us start. Okay, so today we're going to talk about KEDA. This is a Kubernetes conference, right, so we'll mostly be talking about Kubernetes. I won't get into what Kubernetes is and all that; we'll just move forward.
So I'll just tell you where I'm from. I'm a Microsoft MVP with around 12 to 14 years of experience, of which around 7 to 8 years are in cloud. I've been working for a very long time, and I have moved from VMs to regular compute engines and then to Kubernetes.
So there are several layers, and that's how we came to Kubernetes. We had a very large project where we ran a machine learning workload to OCR documents containing PHI data. That was the first time we started working on Kubernetes, and we thought, okay, let's check out what we can do with it: how we can get serverless-style scaling and still have more control over what compute it runs on. Because with serverless you cannot choose the compute, meaning the GPUs, CPUs, and RAM;
it comes pre-configured, and you can only use whatever that serverless offering gives you. So Kubernetes gives you the flexibility to choose those compute engines, KEDA handles the event-driven scaling, Helm acts as a package manager for you, and virtual kubelets help you pay only for what you use.
A: These are the three things we actually figured out while trying to run a huge machine learning workload. When we calculated it, it was coming to around 25 lakhs; and when we started using KEDA, Helm, and virtual kubelets, it came down to somewhere around three and a half to four lakhs, of which one lakh went on writing logs. The logging by Kubernetes cost a lot more than the actual workload, because we did not disable it. That's something we learned only after spending a lakh. Okay.
A: Alright, so the agenda is simple. We'll talk about what autoscaling is, what HPA (the Horizontal Pod Autoscaler) is, why we chose KEDA and what KEDA is, then virtual kubelets, Helm charts, and a demo. Okay.
A: Scale — so what is an autoscaler, and why do we need one? Cloud by itself is an autoscaling entity, right; we go to cloud because it can scale to a huge amount. Sometimes the scale is so high we cannot even stop ourselves: one small mistake we make can blow up the whole bill. That is one side
of using an autoscaler. What is an autoscaler? It is nothing but the ability to add computing power to an instance based on demand. That's a huge thing, right. Initially, when we started our careers, we had to send an email whenever a project came up, saying we need this much RAM — say 4 GB — and say 20 GB of hard disk. That was the server we used to get, and procurement would take a long time.
A: Once our clients started using the app, we then had to buy more compute and add more CPUs and RAM to the existing server racks. It was always a problem: either we over-provision and the project doesn't grow that much, or we give just enough spec
but the project grows beyond what we estimated, or some bug in the code ends up eating more RAM and CPU — that also happens. So the autoscaler is a huge gift to us from the cloud: based on need, compute power is increased, and it's basically seamless. You will not even see the compute being added to the system. There is zero configuration we need to do whenever there is a scaling need or
the traffic is very high. Okay, so this is the HPA I want to talk about. Kubernetes does autoscaling very, very well: every 30 seconds the metrics API is polled, and then the replica count can be increased, pods can be added, the number of nodes can be increased, and so on.
A: But scaling based on, say, the number of requests, or the number of items queued into the system — you cannot do that with HPA. Meaning, only once the CPU or the RAM reaches a certain level does it add, say, one or more nodes to the cluster. It will not be based on the actual load that is coming in. Say I am getting 10,000 requests and I want to be proactive: once I hit 10,000 requests, deploy say a hundred pods, even before utilization climbs. With HPA, only when the nodes reach say sixty or eighty percent of whatever metric threshold we configured will it scale — that threshold is configurable, so we might reduce it to sixty to see it sooner, or by the time the load reaches 20,000 requests CPU might have hit 80% and only then do we get one more pod. So it is not based on the load that comes in; it is based on the CPU and memory metrics being consumed. That's the problem with HPA.
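For context, the metric-threshold behaviour just described is what a plain HPA manifest expresses. A minimal sketch (the Deployment name `web` is a hypothetical example, and 60% matches the threshold mentioned above):

```yaml
# autoscaling/v2 HPA: scales only on resource metrics such as CPU,
# not on external event counts like queue depth or request totals.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out once average CPU crosses 60%
```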
A: How is that problem solved by KEDA? KEDA uses HPA: it counts the events and then uses the HPA to increase the number of pods. So basically KEDA and HPA work together. KEDA itself runs as a pod which acts as the event watcher, and an HPA is attached to the deployment KEDA manages, so the HPA will increase the pods based on the load that actually comes in. Okay. If you see the diagram here: whenever there are metrics, KEDA collects them, using Prometheus or any other source,
A: and whenever there is an increase in load — based on the number of requests, or a large amount of queue data coming in, whether in a queue service, in Kafka, or on any other platform — it reacts. There are multiple KEDA scalers supported: an Azure Functions scaler, an Apache Kafka scaler, a Service Bus scaler, an HTTP request scaler, and many more. Once the relevant scaler knows the state of that particular event source, it acts on it.
A: Okay, so any system that needs runtime scaling or event-driven scaling — you can use KEDA, and KEDA will take care of the pods that come up. Okay, yeah. So KEDA is basically an event-driven autoscaler that does what the HPA, the Horizontal Pod Autoscaler, cannot do on its own, because the Horizontal Pod Autoscaler only reacts to CPU and RAM metrics.
A: So how does it work? The virtual kubelet is another pluggable architecture, implemented using the kubelet interface to connect to Kubernetes. It also works with the HPA and takes care of increasing capacity. A virtual kubelet is nothing but an abstraction layer over the nodes, where those nodes run only when there is a need for them. Okay. When you take Kubernetes, you would have seen that you need at least one master node, and then you can add nodes to it.
A: Based on your needs you can keep just one node and then add nodes through autoscaling. Okay, but the problem with nodes is that they come up and down based on CPU metrics only, and you have to pay for the whole time a node was up. In the case of a virtual kubelet, you don't pay for the node; you pay only for the amount of
D: Hey guys, the speaker is facing some issues with his network, so we'll be resuming shortly; just stay in there.
D: And if you have any questions on whatever has been covered so far, please feel free to post them in the chat. We'll definitely take them up.
A: It's fine, I didn't lose my place. Okay, so thanks again — I'll just repeat what virtual kubelets are. For example, I have a component, and in Kubernetes I am scaling based on my CPU metrics; if it hits 80%, I'll be adding a node
and paying for one whole node — say one core and 4 GB of RAM, right. But with a virtual kubelet, if I'm going to use KEDA scaling on a virtual kubelet, and every pod takes say 0.1 of a CPU, I'll be paying only for that 0.1 of a node; and if a pod uses say 1 GB of RAM, I'll be paying only for 1 GB of RAM.
A: So it is like 0.1 CPU and 1 GB of RAM: if the node costs 100 bucks, I'll be paying only 10 bucks. That is a huge cost saving, because most of the time, when you scale across multiple nodes, you will not be using each node completely; you'll be using only say 60 or 40 percent of a node and the rest sits idle — at the very least the last node will be partly free.
A: Okay, so we'll see how a virtual kubelet works. Basically, the virtual kubelet sits next to the nodes as an abstraction layer of a node, and it has its own containers, operating system, and pods. It's basically one more deployment you do that looks almost like a node, sits alongside the master node, and calls the underlying provider whenever it needs capacity. The underlying provider in Azure — I don't know the equivalents on other clouds — is ACI, Azure Container Instances, which is basically serverless. I mean, it's a hybrid of serverless and Kubernetes, without even having a Kubernetes cluster underneath; you could say ACI is serverless Kubernetes. The virtual kubelet connects to ACI and starts spinning up those instances, and you pay only for whatever you actually used. Okay, so what
A: we saw until now is what KEDA is: something that scales your Kubernetes based on the workload, meaning events rather than the CPU or RAM metrics. And a virtual kubelet is basically a kubelet implementation backed by containers, where you will not be using — and will not be paying for — a complete node.
A: And this is Helm. A Helm chart is nothing but a package manager for Kubernetes — think of it in terms of infrastructure as code. Whenever you want your infrastructure as code, Helm is the best way to work in Kubernetes. Most people will already know about Helm and charts and so on, so I'll just touch base on Helm and also talk about KEDA and virtual kubelets, so you'll have an understanding of what it's all about. Okay, fine.
A: So here you select your subscription, resource group, and all the usual setup you need to do. Okay, the main concept here is the node pools. When you come to node pools, you need to enable the virtual nodes. If you don't enable this while creating the Kubernetes cluster, you need to run a few CLI commands later to get the
D: Guys, we'll have the host joining us shortly. I think there are some issues again, so just stay with us.
A: So this is the YAML file; for brevity I'm going to show the scaled object here. The rest — it's basically a service, a deployment, a secret, and a ScaledObject, which you already know about from the Helm charts. The main point you have to note here is the node selector. You need to make sure that you select the virtual kubelet node, to make sure the workload runs on the serverless side, meaning on the virtual node.
A: Okay, so one is the normal master node pool that you have to pay for by default in any case, and this virtual node (ACI) is basically an abstraction layer: it just shows you there is a node, but you will not be paying a single penny while it sits there. Only once you run a workload will it actually start spinning things up. Okay, let me go and
A: So the virtual kubelet is still there; this one is based on a Linux VM. You can even create a Windows virtual kubelet and run the same Kubernetes workloads you have already run. Those are the flexibilities you have, where you can handle multiple operating systems from the same Kubernetes cluster. So that's a good way to go. That's all from my end; the demo is done.
A: It's a very simple demo. All you need to understand from it is that KEDA is an event-driven autoscaling platform that increases your compute based on the number of events that come in, because the Horizontal Pod Autoscaler does not scale on events; it scales based on resource metrics. KEDA helps us scale so that we can still use the power of HPA to meet our workload needs. Okay, yeah, I think that's it from my end.
A: So this is my book; it's available on Amazon right now. When I wanted to move from senior developer to architect, I went and asked multiple people, but there was not one place where someone told me, okay, these are the things you need to do to become an architect. I did lots of googling and spoke to many people, but there wasn't one place where it was all collected. Some people told me to go and read the SOLID principles; some said to learn more about design.
A: Some people said you have to know every new technology that comes along, but there are certain things you need before even learning new technology. It's basically 30 to 40 percent technical; the other 60 percent is all communication skills, positioning yourself, how to talk to your stakeholders, and how to convince your stakeholders to adopt new technologies. Those matter more than the technical side, because after about six years
you get fairly sound with the technical things, but you don't know how to make them more solid — understanding the basics, and which basics you should know. These are the things that came up for me on the way to becoming an architect. So I started mentoring multiple people to become architects, and then somebody said, why can't you make a seminar out of it? So I did a seminar; some 60 people came, and then they asked,
would it be okay if you wrote a book? Then I realized that if I wrote a book, more people could learn from it. So that is the reason I wrote it. I don't get much out of this book — probably some 80 bucks out of the 600 you pay; the rest goes elsewhere. But there is one seller who sells it for 500, which is fine. If you are interested, you can check out that book.
A: It's pretty nice. You can follow me on Twitter — it's karthik3030 — and give me feedback. If you want any details on how to become an architect, please do contact me. I'm on LinkedIn now; just search for Karthik VK and you should be able to see me. I guess it will be the first name
in the results, yeah. So you can just check out my LinkedIn profile. Anything you want — any details on becoming an architect, or if you're struggling to become one or to understand how — please do ping me; I'll be happy to help you. Okay, thank you, guys. I can answer any questions.
D: Hey, thanks a lot, Karthik. I think that was a really insightful session, and we have a couple of questions we can start with. The first one is: how is the virtual kubelet implemented in a non-cloud Kubernetes environment?
A: Okay, that's a very nice question. Basically, you need to run some kubectl commands. You can run these commands to get the virtual kubelet installed in your Kubernetes cluster. Yeah, so these are the commands.
A: That's what I told you, right: whenever you create a cluster in the cloud, you can either select "enable virtual kubelet" up front or install it later. You would have to do the same thing on-prem, but I'm not sure how much advantage on-prem gives you, because either way you have already bought the hardware.
D: Okay, awesome, yeah. I think we have covered all the questions that we had. Thank you, guys, for joining, and thanks a lot, Karthik, for this awesome session. For any further questions that you wanted to ask, you can ping on the CNCF Slack channel.
B: Hey all, I hope you have enjoyed this session by Karthik. The next session is going to be on understanding the Kube Scheduler Simulator, by Pravar Agarwal. Pravar is a DevOps engineer with six years of experience in cloud automation.