From YouTube: 2020-01-23 KEDA Standup
A
Yes, I remembered, let me record this. We'll go through: if everyone wants to, you can just quickly say your name, the company you work for if you want to, and then, for my own sake, when you joined, though many of you I actually recognized when looking at this list from yesterday. After we do that, we'll do a quick roundtable of updates for folks who had action items or were working on things with KEDA, and then we can go into the proposed agenda.
A
It looks like Tom added a few good items here, but I was just looking at it a little bit earlier, looking at some of the new PRs and issues that have come in. I was also just digging through an issue around ConfigMap that I didn't quite have time to digest before I joined this call, so we'll go with that. And for anyone who potentially wants to add additional items to the agenda that aren't here, this is a great time, even if you just want to use that link I pasted in the chat window.
You're back on the call? Sweet, great. Okay, so maybe the first thing we'll do then, and thanks everyone for joining, is updates. Maybe we'll go down the list of attendees; there were a few action items I even briefly reminded people about, so we'll give people a chance to give quick updates before we get to the agenda. So Dan, do you have any updates on the Postgres stuff, any questions or conversations, or just a quick status update? Yeah.
F
I can start, I'll just do a quick status update. So I got the basic Postgres scaler merged, and I'm also going to be working on the MySQL one this week. I've also got the Postgres scaler working with Apache Airflow, so hopefully next week I'll be able to demo that. For anyone who wasn't here last week: my primary job is working on Kubernetes integrations for Apache Airflow, and we're using KEDA as a way of auto-scaling Celery workers, so yeah, I guess.
A
So the challenge today, pro and con, is that we kind of made the design choice initially with KEDA that we were going to use the HPA for everything that we could, and only use KEDA for the stuff that the HPA couldn't do. And so all KEDA does today is zero to one, and it's always zero to one; it's never zero to two or zero to four. Then, once it creates the initial one, it creates an autoscale rule for the HPA, and then it's really up to the HPA to scale.
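The split described here can be sketched in a ScaledObject. This is a rough illustration against the v1-era API; the object name and trigger are examples, not details from the meeting:

```yaml
# Illustrative v1-era ScaledObject: KEDA itself only handles 0 -> 1.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    deploymentName: worker       # KEDA activates this deployment from zero
  triggers:
    - type: rabbitmq             # example trigger; any supported scaler works
      metadata:
        queueName: jobs
# Once the deployment is at 1, KEDA has generated an HPA, and that HPA,
# not KEDA, decides how far beyond 1 replica to scale.
```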
A
So we have limited control over what we can do, primarily in the zero-to-X phase versus the X-and-beyond phase. I think what has come up, in another conversation I had this week around KEDA, is that potentially KEDA should be flexible enough to integrate with an autoscaler that's not the HPA, so that we get more control and knobs over how and when it scales. So.
F
I guess the question I have, specifically in the case of the Postgres scaler where you have a database: what if I were to set some sort of database rule, like, say, store state in Postgres, and when it's doing the periodic ping it can just do whatever math? So, you know, if that row is 30 seconds old and it says up-size by 2, then add 2 to that, and basically keep external state. Maybe that.
A
It's possible; I'd be curious to get other people's thoughts on the call as well. The proposal you kind of went with is: the HPA is going to be making decisions based on the metric it sees currently and the target metric that you define, and you're kind of saying, what if, when we publish the metric, we game that metric? So that if it's been a long time, or whatever, there's some logic that says: hey, even though the number of rows is three, the three rows are really old, so tell the HPA that the number is 30 so that the HPA does more work. It's possible. It just requires that the logic now rests with the scaler, to kind of reverse-engineer and in some ways game the system. I'm kind of okay either way on that one.
F
Yeah, I mean, the situation that I've received feedback on is this: the Postgres query we're using right now is, the number of running tasks plus the number of queued tasks, divided by the number of threads per worker. So if you had a situation where you had, say, a thousand tasks that each run for five seconds, you might not want to scale to a hundred pods just to do those thousand.
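The query described above would sit in the Postgres trigger metadata, roughly like this. The table, column, and thread count are invented for illustration; the exact metadata keys should be checked against the scaler docs:

```yaml
triggers:
  - type: postgresql
    metadata:
      # (running + queued) divided by 16 threads per worker
      query: "SELECT CEIL(COUNT(*)::decimal / 16) FROM task_instance WHERE state IN ('running', 'queued')"
      targetQueryValue: "1"
```

With a thousand queued five-second tasks, this value comes out to ceil(1000 / 16) = 63, so the HPA would ask for 63 workers even though the backlog drains in seconds, which is exactly the over-scaling concern raised here.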
G
Yeah, sorry, I was just saying that I think what Daniel is saying makes sense, that we can hack it, sort of. But I feel that there are many other scenarios where having a custom autoscaler would help; I think you mentioned some of them yesterday. And I believe Knative also supports that, right, that you can bring your own custom autoscaler? Sure.
D
That would require us, at least from my side, I figured that would require me to understand very well how the HPA actually does the scaling, and what kind of metrics you need to give it for it to be smart enough to scale to what you want to scale to. But I think there's still value there. And as I said, I think the idea of integrating with other autoscalers is also valuable, eventually, yeah.
A
That's a good one. So yeah, maybe for the sake of time: I think the answer is that there is definitely interest. I think there's even been some consensus that how you were thinking about it initially, that the scaler uses some logic to push metrics that make sense for what it's trying to do, is valid. And so, if you want to create an issue, or move the discussion to GitHub or Slack or wherever else, or even start some work on what you're imagining, or a proposal, I think that's a great next step. Awesome.
G
Just looking at it: he says that we would need to have approval. Whether we want to do it is a different matter, but is there even a process to ask for approval? I don't know; at least that's what he says. And if it's this way, per the first line of this, can we go right up to the top?
A
And I'm fine if we do 2.0, whatever, in the next month. It's a version number; we don't need to feel like it has to be an annual release like Windows or something, if it's going to be more like Chrome. But that's a good problem to bring up. And I saw Thomas just joined the Slack group, so I might ping him on Slack as well, and we're on the issue, and we can try to sort out when we need to do that. Anything else, Ahmed?
C
So I am still working on the Azure Monitor scaler; I should probably have an initial PR out Friday, definitely Monday. I ran into a little bit of an issue with the docs, and so I'm going to update those; I personally like docs to be very detailed and explicit. So I'm going to fix an error, and then a couple of steps that weren't mentioned.
C
And then, something else I ran into that was a little frustrating, and maybe someone who understands the KEDA codebase better knows something different about this than I do: the initial spec that Tom proposed had multiple indentations in the YAML. So it was metadata, and then multiple sections under metadata, and then a section under each of those sections.
C
Yeah, so for me personally, I just took the beginning words of each of the sections, prepended them to the keys, and called it a day. But it's something to think about in the future, because, for example, with the Azure Monitor scaler it would be nice if there were more options than just the metadata with a flat key-value pair, since there are several different pieces to it. But maybe that's a future problem.
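The prepending workaround described above looks roughly like this; the key names are invented, since the point is only that trigger metadata is a flat string map:

```yaml
# What the proposed spec wanted (nested):
#
#   metadata:
#     aggregation:
#       interval: "0:1:0"
#       type: Average
#
# What fits KEDA's flat metadata today, with the section name prepended:
metadata:
  aggregationInterval: "0:1:0"
  aggregationType: Average
```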
A
Let me add two action items. I'll create an issue for the nesting, just to have the conversation and let other folks weigh in, to see where that takes us, so we can track it, because that's a fair ask. And then I'll also create an issue for updating, I think it was the how-to-build-a-scaler README, which I think you were mentioning on Slack had some issues.
E
Yes, so we have the operator ready, so I will start, let's say, the release process, the acceptance process, to have it listed on the OperatorHub page. But if we are going to release 1.2 next week, I will probably release that version instead, so we don't have, you know, two versions right away. So if we can sync on the release next week, I will probably start releasing that one then.
E
I would like to have it in the 1.2 release. It's a very simple change: I basically just split the containers from one deployment into two separate deployments. There is no change in the code itself, and I also changed the structure of the YAML files a little bit to be more consistent, so.
E
Okay, yes, so the main idea behind this is basically about the internal implementation. We can reference in the ScaledObject any object that is capable of scaling: we can have the Deployment, we can have the StatefulSet, the Job, and it could basically be scaled by KEDA very easily, because it implements the scale subresource. So you can.
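For context, the scale subresource is the standard Kubernetes mechanism being referred to: any resource that exposes /scale (Deployment, StatefulSet, or a CRD that declares it) can be resized uniformly. A sketch of what a generalized target reference might look like; the actual field names would depend on the eventual design:

```yaml
# Hypothetical generalized target: anything implementing /scale.
scaleTargetRef:
  apiVersion: apps/v1
  kind: StatefulSet    # could equally be a Deployment, or a CRD that
  name: my-db          # declares the scale subresource
```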
A
Yeah, I think that's a good one; I'll create an issue for prototyping it. The other option, Ahmed, is that I know Scott and Matt from VMware are more than happy to help; when we were at dinner with them, I think they said they will do as much of this as they can with KEDA. So we can also lean on them, whether it's having them do a brain dump where they kind of show us what the pattern would look like. I think there's a few folks on that side.
B
Okay, go ahead, Tom. So, last week we discussed adding more logos, so I have a PR open for that, so that keda.sh would basically have an extended community section. There's no screenshot, sorry, I don't think there is one, but basically it would have a community section of the companies, sorry, that are contributing, and I also opened an issue to see who is actually using KEDA, and then, above the community section.
B
We could then also have, let's say, a customers gallery, so that we can see who is using it and why they are supporting us, basically. And then the other one is that there's a refresh of the .NET Core sample with Service Bus, because, and I'm going to butcher his name, Jacob or something contributed a front end for it, so that if you're using it, you can now also visually see how KEDA is draining the whole queue. And over there, there's also an action item for you, Jeff, to give me permission so that I can set up the CI build and also push to, I think, the KEDA Docker Hub account, so that it's not on private accounts anymore.
F
One question about the customer list, just to see how you guys would want to do it. Assuming everything works out, which I don't see why it wouldn't, and KEDA becomes a scaling back end for Airflow: would you want to just list Apache Airflow as a user, or would you want to try to see which companies are actually using the KEDA auto-scaling with Apache Airflow?
A
I'm glad you did it. We're kind of the victim of a new process and some new SIGs: the runtime SIG and the apps SIG are brand new, and we've been working with the serverless workgroup, which hasn't even figured out if it's going to be in runtime or apps, and then there's this whole new process. So yeah, we'll just keep an eye on it and poke it, because I don't want this to get delayed too much just because there's some growing pains happening elsewhere, but so far.
A
So Tom gets credit here, because he's the one who just pinged the issue this morning, so appreciate the help with that, Tom, and I'll be there to help out as well. Right, awesome, okay, yeah. Maybe we'll do this one quickly, unless, Thomas, you want to do a brain dump. We don't have to go through every issue here; I just want to do a quick scan to make sure there's no issues or PRs that are concerning. I saw one PR... sorry, go for it.
A
It would be good to wait for the stuff Mel's working on with Azure Monitor, if we could; she mentioned that there might be a PR Friday, Monday, or Tuesday, and I assume there'll be some revisions. So if I was guessing, I would say early Feb would be 1.2, but we could do 1.2 sooner if whatever we're waiting on needs to ship sooner.
F
By the way, I did have one more question: has there been any discussion about using kind for integration tests? Like a framework, or just having an integration test suite? Because I can see, at least for the Postgres side, it'd be pretty easy to helm-install a Postgres instance and then just make sure that it's able to connect.
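A minimal sketch of that kind-based check, assuming the standard PostgreSQL Helm chart; the names, flags, and assertion step are illustrative:

```shell
# Throwaway cluster plus a Postgres instance for the test run.
kind create cluster --name keda-e2e
helm install test-postgres stable/postgresql --set postgresqlPassword=testpass

# Deploy KEDA and a ScaledObject pointing at the database, then assert
# that the scaler connected, e.g. by checking the generated HPA exists.
kubectl get hpa

# Tear down.
kind delete cluster --name keda-e2e
```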
D
I was attempting to do something similar with Redis, where I just use helm to install Redis and then run the tests against it, but so far I only have tests for things like cloud events that are sort of easy to create and just use the values from. But if there is another framework that works better, feel free to use it; you can see the tests that we have in the test folder.
A
We do have the nightly tests, which are still failing. So maybe that's one question on that: do we know why they're not passing? It's something about the queue complaining, saying the queue is not scaling down fast enough. But every night we have a cluster that gets spun up, we install a bunch of stuff on it, install some scalers, so we could continue to leverage that. But, as Ahmed mentioned, if there's a different framework, that might even make this easier.
A
Okay, yes. So, looking at pull requests here, there's a few; I don't think I see anything concerning. There are some README edits, which is nice. There was someone on Slack who was saying that the new Kafka authentication options weren't working for them, but I don't know if I ever got an answer back. And then issues: I was just digging through here.
A
The only one I saw that had a fair bit of back and forth was this ConfigMap one, but again, I was trying to parse it before the call. I see Tom... oh, it's Tom and Tom, both Toms are on this one. I don't know, Tom, if you have any thoughts on whether this one's worth discussing, or if we should just keep it to GitHub.
F
Now this is, this is KEDA you're building locally, yeah.
F
Like, if you're offline, you could set the image pull policy to Never, and then, if you do kind load docker-image and then the name of it, it will push from your local Docker registry into the kind node, and then it will load, and that's great. It's great for offline work. Okay.
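Daniel's offline workflow, sketched out with an example image name:

```shell
# Build locally, then copy the image straight into the kind node,
# bypassing any registry.
docker build -t keda:local .
kind load docker-image keda:local
# With imagePullPolicy: Never in the pod spec, the kubelet uses the
# loaded image instead of trying to pull it.
```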
C
Cool, yeah, I also love this comment in the chat. I looked at the README update PR, and it's good as an initial thing; I was going to put in more info, and I think it would be helpful for me to add this: like, oh, if you're using a Kubernetes cluster, you either need to push to your repo or you need to use what Daniel's talking about, yeah.
A
Agreed, yeah. I think that thought is accurate: once it's in the cloud, you would have to push it, but if you don't want to have to push something, then you could use kind, or, I don't know, minikube has the same feature to load from the local side. But that's great insight too, and that would be super nice to have in there.