From YouTube: 2020-10-01 KEDA Standup
Description
A: I always do record to this computer, but I assume recording to the cloud's probably easier. Who knows. All right, let me go ahead and share my window right here and we'll get started. Okay, just making sure I have chat and everything up too, so if folks start talking we can address it. Welcome everyone, happy Hacktober and also October. We'll jump right into it. We've got a few things on the agenda: I added a thing, it looks like Tom's added a few things, and then obviously we've got the KEDA 2.0 beta, which has been out for a bit now; maybe I'll just pop it at the top of the agenda. So I don't know if there are any learnings or pressing action items. Maybe to start we'll just quickly go through intros, because we do have a good amount of folks here, and I think I've met everyone before, but just wanted to make sure as well. So I'll start: I'm Jeff, I work at Microsoft. I'll introduce a few folks from Microsoft as well for the sake of time. We also have Tsuyoshi and Anirudh, who are at Microsoft as well on the serverless team. Tom, Travis, you both want to intro in whatever order? I guess Tom, maybe you go first, briefly.
C: Travis, yep. Travis Spickford, I'm with Shutterstock, the stock photo company, a staff engineer on the cloud DevOps team.
A: Yeah, there's just a lot of, it sounds like, some kerfuffle behind you; the audio is just really poppy. But we heard a brief intro, Shubham, thanks for joining. Ritika, do you want to do a brief intro? Either unmute or in chat, either one's fine. Sorry Shubham, it's still coming through really poppy. I don't know if it's a device thing or a location thing, but it sounds like I put a stereo on full blast right next to my ear, and looking at the chat I think everyone else is having the same experience. Okay, so Shubham, Ritika... it looks like Ritika's audio disconnected too, so we won't worry about that. Zbynek, last but not least.
E: Hey, my name is Zbynek, I'm working on the serverless team at Red Hat.
A: All right, we will jump right into it then. Maybe we'll start with the KEDA 2.0 beta. I saw the beta go out, and then when we had the stand-up after that it was my son's birthday, so I missed the last stand-up. Anyone want to chat about that? From what I've seen it sounds like it's going well. I don't know if there are any big learnings or things worth calling out, or just in general if anyone has any updates on that one.
A: Is it worth going over some of the pending action items now? Then maybe we can just make sure they're assigned, or if you see any you want to jump on, or we can wait.
E: Well, there are a couple of minor things, so I think that we are good to go. From the KEDA 2.0 perspective there is one outstanding issue on the push scalers for scaled jobs, and then the rest is just some minor things. So I think we are good.
E: Yeah, and related to this: some of the feedback from the beta I've seen so far is that there are requests for clarification around the scaled jobs. So maybe it's worth it if you can take a look at the discussions on GitHub, and some are even on Slack. Okay.
A: "When can I use this safely in production without feeling like I'm using a beta product?" On that note, I was thinking around the end of this month; at least that's what I had in my head, and I don't even know if I've vocalized that before. Tom, Zbynek, anyone on the call: do you have thoughts on what our goal should be for when we make this the default 2.0 version?
A: So maybe, yeah, if we can close on these ones. Tsuyoshi, if you're able to fix that, and then Zbynek, either flag on the other ones. Maybe when we meet next in two weeks, on the 15th, we can do the go/no-go, but plan on using that as the "let's do this," and then starting on that day or shortly after we could start to roll out the 2.0 stuff and such, which would be great. Okay, yeah, sounds good. Okay, so back to the agenda stuff.
A: Actually, before we do that, because usually I do this too and I didn't when we were doing intros: I didn't really go around for any other updates. Does anyone have any status, stand-up-type updates of things that they made progress on, or even things that you'd want to add to the agenda?
A: I'm booked for Azure Friday, but they have a different Azure Friday they want me to do first, and then they said that we can do the KEDA one. So I'm trying to get the other one done quickly so that I can get to the KEDA one, maybe around the 2.0 stuff; we'll see. Okay, great. Anyone else have anything they want to share, a flag, or just a status update?
B: Waiting for Alibaba Cloud, and then we'll just have to help set that up and see if we need to provide some content. And I think we can also check if we could do a webinar, maybe as a reference case on how they use KEDA. But for now we don't have to do anything.
A: Yeah, I agree: that's a good idea. Okay, great. I'm just pausing in case anyone else has anything they want to flag, and then we can just go through these items.
C: I got one: I've got a PR there for the Helm chart which has incorporated the API CRD changes, but there's a bug introduced in Helm 3 in regards to the linting, and the PR is blocked right now because of that. So I was wondering if we can at least temporarily revert to a prior version of Helm for the linting part of the CI checks in the chart.
E: Oh yes, for certain. Or we can even rename the cluster or the like, if it is needed; whichever option is the easiest or the more convenient, I'm okay to do it.
A: I also have no concerns with that. Okay, awesome. Okay, great, I'll make a note to do that right after this call. I'm getting background noise like crazy, by the way; apparently it's yard work day, but enjoy. I can't remember how to do this stupid action. Okie dokie, great. Thanks for the update, Travis, that's great progress, and we'll make sure to unblock you there. Okay Tom, I think we're to you then on this proposal. I'll go ahead and open it up so folks can see it.
A: But I'm fine to move too. I really don't care; we'll just have to update the links.
A: Okay, so the main one is just a reference to these agenda items where we have other agenda items. Yes... no, I just literally copied everything. Oh, I see, this is September 17th. So the action item, the discussion, is: should we move to Google Docs, or should we stay on HackMD? Yeah.
B: I think how it's set up now is that nobody can add things to the document, but they can do comments with suggestions.
A: It starts getting laggy again, though, even after we archived a bunch of old stuff. I do think we should go, because I noticed it as well: whenever I would type it was unusable. Yeah, it was pretty bad. Okay, great, so I'll pop in this one. I had a conversation with a customer, a Microsoft customer, right now who is building a massive solution heavily reliant on KEDA. They want to use KEDA to scale like 100-plus pods; they're kind of in the final design stage.
A: But the question they asked me is: given the fact that KEDA is a single operator, what guarantees or what recommendations do we have around making it more highly available? And so I gave a generic answer, which was: well, you could partition by namespace. So of those hundred-plus pods, maybe ten of them are in one namespace, ten of them are in another, and then you could have multiple operators, one per namespace, and you just tell each one to scope to the scaled objects in that namespace.
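The namespace-partitioning idea above can be sketched roughly as follows. This is a toy illustration only: the real KEDA operator is written in Go and namespace scoping is a deployment-time setting, and the `Operator` class and `route` helper here are invented names, not KEDA APIs.

```python
# Toy sketch of partitioning scaled objects across per-namespace operators.
# The Operator class and route() helper are invented for illustration;
# they are not KEDA APIs.

class Operator:
    """One operator replica, scoped to watch a single namespace."""
    def __init__(self, namespace):
        self.namespace = namespace
        self.watched = []

    def watch(self, scaled_object):
        # Only accept scaled objects that live in this operator's namespace.
        if scaled_object["namespace"] == self.namespace:
            self.watched.append(scaled_object["name"])

def route(scaled_objects, operators):
    """Hand every scaled object to the operator scoped to its namespace."""
    by_ns = {op.namespace: op for op in operators}
    for so in scaled_objects:
        op = by_ns.get(so["namespace"])
        if op is not None:
            op.watch(so)

ops = [Operator("team-a"), Operator("team-b")]
objs = [
    {"name": "worker-1", "namespace": "team-a"},
    {"name": "worker-2", "namespace": "team-b"},
    {"name": "worker-3", "namespace": "team-a"},
]
route(objs, ops)
print(ops[0].watched)  # only the team-a objects
```

The point of the partition is blast radius: if the team-a operator crashes, only its objects lose scaling decisions while the team-b operator keeps running.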
A: They were asking... and I actually think I got confused here, because I seem to remember an issue around readiness and liveness probes, so that ideally, even if the operator did kind of crap out and stop responding, Kubernetes would detect it and hopefully recycle it and clean it up. However, in retrospect I actually think that conversation might have been on the Functions side, making sure that an Azure Functions container has readiness and liveness. I don't know if we've implemented that as well.
A: So it's just a discussion topic of: is there more we could or should do here? Is what I said more or less the state of the world? Maybe we do have readiness and liveness. The other thing they were floating around was: maybe we allow multiple replicas, but they have to get a lock. So even if I have two replicas of the operator, one of them is more or less idle unless the lock becomes available, and then it immediately comes back online.
E: Yeah, I was thinking about the same recently, or maybe over the last couple of months. Basically the current state is that the KEDA operator has the liveness and readiness probes, so once it is down, Kubernetes should automatically schedule another pod. So spinning up multiple replicas of the operator doesn't make any sense because, as you said, there is the lock: the other replica has to wait for the lock to be released.
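The active/passive pattern described here can be sketched with a plain lock, purely for illustration. Real Kubernetes controllers use Lease-based leader election rather than an in-process lock, and the replica names below are made up.

```python
# Toy active/passive sketch: two operator replicas contend for one lock.
# Only the holder does work; the second replica sits idle as a warm
# standby until the lock is released. Illustrative only: real Kubernetes
# controllers use Lease-based leader election, not an in-process lock.

import threading

lock = threading.Lock()
events = []

def standby_replica():
    # Blocks here until the active replica releases the lock, then takes over.
    with lock:
        events.append("replica-2 active")

lock.acquire()                      # replica-1 wins the initial "election"
events.append("replica-1 active")

t = threading.Thread(target=standby_replica)
t.start()                           # replica-2 starts, but just waits

events.append("replica-1 doing work")
lock.release()                      # replica-1 dies / gives up leadership
t.join()                            # replica-2 immediately comes online
print(events)
```

This is why a second replica buys fast failover but not extra throughput: only one holder acts at a time.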
E: But I would say that the bottleneck of KEDA currently is the metrics server, because we can easily deploy multiple operators around the cluster that will each watch just a particular namespace or namespaces, but there can be only one metrics server in the cluster. We are relying on a library which comes from Kubernetes for extending the metrics server, and it currently doesn't allow us to deploy multiple metrics servers.
E: I have an idea how to do it, but it's a lot of work, so it won't be that easy. But this is the only solution I can see at the moment. So yeah, they could replicate the operators so they will watch particular namespaces, but still, when it goes to the HPA, the HPA still talks to the metrics server, so we would need to scale the metrics server as well.
E: So this is the critical point from my point of view. And the other thing, related to the performance overall, is the other proposal I had. For instance, imagine that I'm using KEDA to scale a deployment that is consuming Kafka, or Kafka topics.
E: Currently, what KEDA does is basically open a new connection to Kafka on every request. So basically we open the connection, make the request to pull the metrics from Kafka, and then close the connection; the next interval we reopen the connection and do it once again. So I was thinking about reusing the connection, pooling it internally in KEDA. This could help as well from a performance point of view.
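The connect-per-poll behaviour versus the proposed pooling can be sketched like this. Everything here is a stand-in (`FakeBrokerConnection` and the function names are invented for illustration); it is not KEDA's actual scaler interface.

```python
# Toy sketch of the proposed connection reuse. FakeBrokerConnection and the
# function names are invented for illustration; this is not KEDA's actual
# scaler interface.

class FakeBrokerConnection:
    opened = 0  # class-level counter of connections ever opened

    def __init__(self, address):
        FakeBrokerConnection.opened += 1
        self.address = address

    def fetch_lag(self):
        return 42  # stand-in for "pull the metrics from Kafka"

def poll_without_reuse(address, intervals):
    # Behaviour described above: connect, pull metrics, close, repeat.
    for _ in range(intervals):
        FakeBrokerConnection(address).fetch_lag()

_pool = {}

def get_connection(address):
    # Proposed behaviour: keep one connection per broker in a pool.
    if address not in _pool:
        _pool[address] = FakeBrokerConnection(address)
    return _pool[address]

def poll_with_reuse(address, intervals):
    for _ in range(intervals):
        get_connection(address).fetch_lag()

poll_without_reuse("kafka:9092", 5)
naive_total = FakeBrokerConnection.opened         # 5 connections for 5 polls
poll_with_reuse("kafka:9092", 5)
print(FakeBrokerConnection.opened - naive_total)  # 1 connection for 5 polls
```

A real implementation would also need to handle broken connections (reconnect on error) and clean up pooled connections when a scaler is removed.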
E: I have opened an issue for the second topic; I can probably link it over here.
G: To add, I'm just curious, Jeff: was the question from the company that you worked with around scale, or was it around fault tolerance or high availability?
A: The main one they were asking about was around how do we make sure that this thing doesn't have a single point of failure.
A: I don't know what the largest-scale KEDA deployment we have is, but these do make me wonder: even if they went with a single operator, if they have a hundred pods with arguably a hundred different event sources, are these other things going to spring up on them as they continue to go? But yeah, their main one was: hey, what happens if my code just locks up, or my container just locks up? Is my solution more or less down until I manually go reboot the thing? Is there any way that I could do more active-active replication of the KEDA operator?
G: But the liveness probes that Zbynek talked about, we think that might help, right?
A: Yeah, in theory, if it locks up, then as soon as Kubernetes does a liveness check it would see this thing's dead, recycle it itself, and then in theory it would be healthy again, unless we had some larger bug at play.
E: I just want to add that if they split the deployments among multiple namespaces and deploy multiple operators that each watch particular namespaces, it could help a little bit, because if one operator is down it will affect just a bunch of the deployments.
E: Yes, but each operator has to watch a different namespace. If you have two operators watching the same namespace, it would be a mess. Is that our limitation, or...?
E: Oh, it's basically the limitation of how operators and controllers work in Kubernetes, because of what they are doing: they are basically listening on the Kubernetes API, and when there is some change on some object they are watching, they do some action. So imagine you have two operators watching the same namespace. If there is an action, for example to update the scaled object, both operators will try to update the scaled object at the same time, so it will be, you know, race conditions.
A: It feels like... so we're like, well, at the very least we'll let people manually partition through namespaces, so that they could, I guess, lower the impact and not have all 100 in this case relying on one. But it is something that's more or less only documented in that GitHub issue and not somewhere else, and so that's where I flagged there might be a best practice here. Okay, this is fine; I don't have any.
A
I
guess
the
only
other
question
I
have
to
be
in
because
I
kind
of
chew
on
this
and
circle
back
with
the
customer
and
see
where
they're
at
the
single
metric
server
thing.
As
far
as
you
know,
is
the
limitation
solely
in
the
library
or
does
like
the
kubernetes
metrics
system
really
only
allow
one
metric
server
and
that's
why
the
library
only
allows
one.
E: Yeah, this is the idea I'm having: basically extending the metrics adapter library that is handling the calls from the Kubernetes API, so that it will distribute the requests to particular deployments, say, where each will do the call for its particular scalers. That's the idea, but I don't want to make this option KEDA-only, so... but we can...
E: We could do this internally, so we could hack our usage of the metrics adapter a little bit, but there are still problems when users are using different tools that use the same library for extending the Kubernetes metrics API. So I guess the ultimate goal is to have one solution that will allow users to use multiple KEDA instances in the cluster and, at the same time, to use the other tools that are consuming the same library. Yeah.
B: Would it make sense if we move the scaler logic out into dedicated containers, so that we isolate it on that level as well? So, for example, let's say we have 1000 containers all using Service Bus, and then we have 10 of them using Kafka. If we have separate containers, the load on the Kafka one would be less than on the Service Bus one, so that if Service Bus is draining the whole container, Kafka is not impacted.
E: I don't think this is beneficial, because if the user doesn't use any scaler, or is using, for example, just the Azure scalers and they don't use other scalers, it doesn't affect the code at all, because, you know, the code is just idle. It's not...
A: Yeah, and you kind of hinted at it to me, but the reason I was curious on the metrics thing is I didn't know if there were any projects that are probably in the same boat, where they're relying on the metrics server.
A: Like, I don't know if Knative is using this metrics API or something similar to that, where maybe we just reach out like, hey, is this something that you've hit too? Best case, they're willing to tackle it and maybe throw a resource or two at it, and then we can try to figure out a solution that helps everyone. But I don't know if that's... yeah, or whatever.
E: Yeah, that's exactly the case. Knative is not using it, but there are other projects; I currently forget the name, but the latest one was probably Datadog or something like this. So yeah, I'm planning to do this: I'm planning to basically sync with all those people and probably with the respective Kubernetes working group, because I'm not sure. Yeah, we should definitely do this in cooperation with the other folks.
A
Sounds
good
yeah,
let
me
know
how
I
can
help
there
and
feel
free
to
loot
me
in
any
spot,
but
but
it
sounds
like
you've
got
a
good
handle
on
some
thinking
there
and
definitely
a
worthwhile
thread.
I
think
we'll
like
I.
I
mostly
want
to
validate
my
thinking
and
especially
making
sure
we
do
have
like
readiness
and
liveness
so
I'll
circle
back.
A: I suspect this will be good enough, but if I learn anything else, or if I find out that they have any issues doing some of this, I'll be sure to bring that back here as well.
A: But this is a workable solution right now. Obviously there are ways we can improve it, but I think it will work for them. Okay, so that's all I had on HA. The last thing we have in our scheduled agenda, Tom, is some stuff around the roadmap; I'll let you chat about this topic.
B: Yeah, this question comes from the Slack, where people are asking what the roadmap is, so the more longer-term roadmap. Actually we don't really have that, and I think it's maybe good if we come up with a sort of vision of KEDA, where we want to go over time.
A: Yeah, my preference... I do think it makes sense. My preference is to keep it as process-lightweight as possible, to even just popping in a project here that's like "roadmap," and then we can just pop in a few of those high-level things that we've chewed on that we know we kind of want to do, and then maybe some shorter things that we know we are doing in the time frame.
A
And
then
we
can
just
kind
of
keep
track,
but
I,
I
think,
kind
of
keeping
it
at
the
more
epic
level.
Something
like
you
know,
high
availability
improvements,
and
we
maybe
the
the
item-
mentions
this
stuff
or
or
connection
reuse
might
be
another
big
one,
but
hopefully
not
as
granular
as
like
individual.
A: ...operations. Yeah, like cluster autoscaler integration, that's something I could see. I'm trying to think of what project I was looking at the other day that had a roadmap that was broken down like: things that we may do, things that we know we will do but just don't know when, and then stuff that we're doing...

D: Right now, and...
A
The
stuff
that,
like
things
we
may
do,
could
be
pre
like
cluster
auto
scaling
is
one
of
those
things
that,
like
we
may
do,
I'm
cool
to
put
it
there.
I
don't
think
we've
committed
yet
that
we're
gonna.
Do
it
because
there's
a
bunch
of
questions
but
like
it'd,
be
good
to
track
there,
and
people
could
look
at
it
and
be
like
oh
yeah,
plus
one
on
this
or
or
whatever
else.
A: I don't know, it's a real pain in the butt. Yeah, I think this is good. I think this is a fine pattern, but again, my vote would be as process-light as possible.
A: It's easy enough for us to just spend a minute or two in a stand-up and update it. I even suspect our roadmap will probably have a fraction of the things that...
B: With a view to CNCF incubation, we should also do it to have more of a vision. Let's see.
A: So Tom, do you want to take a stab, or do you want me to take a stab, at just creating the project with the templates? I guess it'll only take a few seconds; I can throw in a few things that at least I've been chewing on, and then maybe we'll just add an item for the next meeting and we can look over it and see how it looks.
B: Starting with the draft sounds good. I won't be able to join the next one, but yeah, I'll see what the outcome is.
A: Good. I do think there are a few things that are kind of floating around in our collective minds that even just throwing them down on this project would help, especially for folks getting introduced to this.
A
Sure,
and
and
like
I'm
okay,
either
way,
but
like
I've
almost
for
some
of
the
github
projects
I
use,
because
you
can,
you
can
like
directly
add
an
issue.
That's
here
or
you
can
just
kind
of
like
add
a
note
where
you
paste
in
other
issues
and
it
kind
of
loosely
links
them,
I'm
even
fine.
If
we
just
do
the
note
card
way,
it's
up
to
you
tom,
on
how
much
you
want
just
that
way.
We
don't
necessarily
have
in
two
spots.
It's
fine
either
way
it's.
That
again,
is
something
we've
done
from.
A: Yep, yep. For some of these we probably have an issue floating around, and for other ones we'll just throw something on the board so it's tracked somewhere. Okay: KEDA 2.0 go-live we'll talk about next week, and then the roadmap review.
E: So if I may ask everybody: please reserve some time during the week and just take a look at the issues or discussions or Slack, because there are some good inputs from the users that are using the beta. So basically respond to these users, so we have the responses there; it is beneficial for the project, I guess. Yep.
A
Yep,
that's
a
good
call
out.
I
might
actually
spend
the
because
it
looks
like
we'll
probably
have
some
time
back.
I
might
spend
the
rest
of
the
20
minutes
that
I
blocked
for
this
stand-up
and
go
review
some
of
them
myself,
not
a
bad
idea
for
others
on
the
call
too,
if
you
have
a
similar
calendar
thing,
obviously,
not
folks
in
europe
go
go
enjoy
the
rest
of
your
evening.
E: Yeah, and Jeff, do you know if Ahmed is on, or is he on some vacation or anything like that?
A: He is here. I mentioned last time he's been spending a bunch of cycles on figuring out how to integrate KEDA with some internal things that we're doing. I was chatting to Anirudh about this this morning. I need to circle back; I haven't talked to him in quite a while personally, and so I just need to understand exactly where he's at and what level of engagement we should do, so that between Anirudh, Tsuyoshi, and myself we can make sure we've got fair enough coverage.
A: Yeah, I'll also go back. Honestly, he might even be out on holiday at this point. That's the downside of remote work: if I don't have a meeting scheduled with someone, they're more or less invisible to me. So I will track that down, though, and at least find...