From YouTube: CNCF SIG Runtime 2020-02-20
Go ahead and do that so that I can show some of the slides, and it's good to see. We've got Anirudh from Microsoft here, Tom from Codit, Zbynek from Red Hat, and I'm probably missing a few other folks. So thanks, thanks all for joining; they'll help answer questions as well. I was evaluating ways to mirror my screen, but it looks like you don't see a mirrored version, only I do; that was my exploration this morning. All right, so we'll start with KEDA, the Kubernetes event-driven autoscaler.
This is something that we've had a proposal open for the sandbox, so I just want to share a little bit about what it is and then answer any questions that you might have. In doing this previously, I think it was mentioned, we made a similar presentation to the serverless working group, to a few folks there across SAP, VMware, whatever else, and that went well, but this was before the new kind of policy happened with the SIG, so some of this might be repeated if you were at that presentation a few months ago.
So, some background and history here: KEDA was initially started by Microsoft and Red Hat. First, some background on me: I'm Jeff, I'm a product manager at Microsoft Azure, and when I'm not focusing on open source and Kubernetes stuff like KEDA, I'm helping manage and run the Azure Functions service. Azure Functions is Microsoft's serverless offering, akin to AWS Lambda or GCP's, and one thing that we had observed as a team is that we had developed some technology to help run functions and scale them effectively.
F
But
we
had
customers
and
users
who
are
interested
in
using
this
site
type
of
functionality
outside
of
Azure
and
and
so
we
kind
of
looked
at
the
kubernetes
ecosystem
in
general.
We're,
like
you,
know
what
there's
might
be
a
gap
here
in
terms
of
what's
possible
today
and
what
we
think
is
there.
So
we
we
talked
to
a
few
folks
and-
and
let
me
know
to
if
other
people
only
see
a
black
screen
thanks
Tom
for
the.
F
F
F
No
Pete,
no
okay,
great
thanks
for
flagging
tom
okay.
So we reached out to a few folks at Red Hat, and some of the folks on the call from Red Hat were like, yeah, this sounds interesting, to do this event-driven scale. So KEDA, at its core, is a component that can be installed in any Kubernetes cluster that will enable your cluster to scale pods and deployments, and jobs even, not just based on CPU and memory, but based on metrics that are being pulled from the event source. So, specifically, in Azure Functions:
We don't just scale based on the CPU of your functions; we're actually proactively looking at the queue, or the SQL database, or whatever else it might be, and that helps really rapidly scale your functions as a result. And so KEDA is doing something very similar, in a hopefully very seamless way. So we wanted to make it very simple to wire up metrics from event sources and plug those into things
like the horizontal pod autoscaler. We wanted the ability to scale down to zero, in the same way that Azure Functions users are used to scaling to zero and saving resources. We released this last April, around this time; it went GA, 1.0, at KubeCon in 2019, and we currently have about 20 scalers to different sources like Kafka, Postgres, SQL, NATS, Prometheus, and a bunch of sources out of Azure, AWS, and GCP.
So, even before I do anything else, I did want to show a quick demo, just so that you can see what this looks like; this takes about 15 seconds. So I have a Kubernetes cluster that I already have running, and I have one container, or one deployment, that's in it, and it's a RabbitMQ consumer. For this deployment I've said, hey, it's consuming RabbitMQ messages, and the one thing to note, because KEDA is installed and doing all of its stuff, is that this is actually scaled all the way to zero, because KEDA has let Kubernetes know there aren't actually any queue messages here to consume anyway.
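For reference, here is a minimal sketch of what the ScaledObject behind a demo like this might look like, using the KEDA v1-era API; the queue name, deployment name, and the RabbitMqHost environment variable are illustrative assumptions, not taken from the recording:

```yaml
apiVersion: keda.k8s.io/v1alpha1     # KEDA v1-era API group
kind: ScaledObject
metadata:
  name: rabbitmq-consumer            # hypothetical name
spec:
  scaleTargetRef:
    deploymentName: rabbitmq-consumer  # the consumer deployment KEDA scales 0..N
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello             # hypothetical queue to watch
        host: RabbitMqHost           # env var on the pod holding the AMQP connection string
        queueLength: "5"             # target number of queued messages per replica
```

With no messages on the queue, KEDA reports zero work and the deployment sits at zero replicas, which is the state shown in the demo.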
So you don't even need to consume the resources, or reserve the resources, to run this thing, because there's nothing to be done. And I can show you: if I just look at the keda namespace, you can see there's just a KEDA operator and then a metrics API server that's running and monitoring the stuff. Now, if I go ahead and I watch the pods, this is my RabbitMQ server, and now I'm going to deploy a job which is going to publish a thousand messages to the queue.
Oh, there is work to be done. So now we have one consumer that's come online right away, but what's nice is that, even before that sentence is finished, you can see, because it wasn't just one message, I actually dropped thousands of messages into that queue, that very rapidly KEDA has actually driven this to say: hey, I actually need to scale this RabbitMQ function a lot, to make sure that I drain this really rapidly.
F
So
this
kind
of
very
proactive,
very
event-driven
scale
is
what
kata
is
making
possible
and
if
I
waited
here
for
30
45
seconds,
it
would
finish
scaling
up
consuming
all
the
queue
messages
and
then
scale
all
the
way
back
down
to
zero
again.
So
that's
that's
kind
of
what
kate
is
doing
behind
the
scenes.
What's
making
it
work
is
one
of
our
core
fundamental
value
that
we
wanted
to
do
when
we
built
this,
that
we
set
from
the
get-go
and
we've
continued
to
stand
by
with
our
communities.
We didn't want to rebuild anything that Kubernetes already did. And so, behind the scenes, how it works: I showed you there's that KEDA operator that's running; it also has its metrics server, which connects to the Kubernetes metrics APIs; and then there's a number of what are called scalers. Those are all the different event sources I mentioned: there's a RabbitMQ one, a Kafka one, a Postgres one, a Prometheus one, whatever, about 20 of them. And you end up having your event source; in the case of my demo it was RabbitMQ.
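As a sketch of that wiring: the metrics server registers itself with the Kubernetes aggregation layer as the external metrics API, roughly like the APIService below. The service name and namespace are as used by recent KEDA charts; treat the details as an assumption rather than exact install output.

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  group: external.metrics.k8s.io     # the API the HPA queries for external metrics
  version: v1beta1
  service:
    name: keda-metrics-apiserver     # KEDA's metrics adapter
    namespace: keda
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true
```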
I think in the rest of the slides it's going to assume it's Kafka. And so then you just deploy, you create a deployment like you normally would, so I just deployed using a Kubernetes deployment, and then there's a special CRD that KEDA exposes, called a scaled object, and this is really the metadata where you map your deployment, or where you map your job, to the event source that you care about.
F
So
in
this
case
I'm
saying
hey
it's
my
deployment
that
I
care
about
and
I
want
you
to
scale
based
on
Kafka,
so
here,
I
provide
a
little
bit
of
metadata
for
Keita
to
use.
I
can
configure
things
like
how
frequently
should
K
to
check
to
see
if
there's
messages
to
be
processed,
I
can
also
configure
things
like
minimums
and
maximums.
Maybe
I
never
want
to
scale
all
the
way
down
to
zero.
Here,
I
define
my
K
I'm
interested
in
Kafka.
Here's how to connect to Kafka; I can set whatever info I need to there, based on the event source, and even some values here. Like, in this case with Kafka, there's something called the lag threshold, which is more or less setting the target for scale. So in this case, 50 is saying: for every 50 unprocessed messages in Kafka, I want to target about one replica. And so, if there were a thousand messages, it's going to try to do, what is that, 20 replicas; but if there are only 50 messages, it's only going to target about one.
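Putting those pieces together, a Kafka ScaledObject along the lines being described might look like this (v1-era API; the broker address, consumer group, and topic are hypothetical):

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer
spec:
  scaleTargetRef:
    deploymentName: kafka-consumer   # the deployment to scale
  pollingInterval: 15                # seconds between checks of the event source
  minReplicaCount: 0                 # allow scale to zero (raise this to never go below N)
  maxReplicaCount: 50
  triggers:
    - type: kafka
      metadata:
        brokerList: kafka-broker.default.svc:9092  # hypothetical broker address
        consumerGroup: my-consumer-group
        topic: orders
        lagThreshold: "50"           # aim for ~1 replica per 50 unprocessed messages
```

So 1,000 unprocessed messages would target roughly 1000 / 50 = 20 replicas, capped by maxReplicaCount.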
You go ahead and apply that to your cluster; KEDA picks up that scaled object, the KEDA operator knows about the scaled object, and you can see, in the case of my slide, I've even greyed it out, because it's like: hey, I can scale this thing to zero now, because I know the Kafka event source is empty. KEDA is just doing this by wiring everything up automatically for you to the HPA, so it's not using its own autoscaler; it's just augmenting the existing Kubernetes ways to do this.
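Under the hood, the operator creates a plain HorizontalPodAutoscaler backed by KEDA's external metrics, conceptually something like the sketch below (the HPA name and metric name are illustrative, not exact KEDA output). Note the HPA only covers 1..N; KEDA itself handles the zero-to-one activation step, since the HPA cannot scale to zero.

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-kafka-consumer      # illustrative; KEDA generates the name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer
  minReplicas: 1                     # the HPA handles 1..N; KEDA handles 0<->1
  maxReplicas: 50
  metrics:
    - type: External
      external:
        metric:
          name: lagThreshold         # illustrative external metric served by KEDA
        target:
          type: AverageValue
          averageValue: "50"
```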
F
If
a
message
pops
in
oh
yeah
during
this
whole
process-
and
now
it's
just
up
to
Keita-
to
constantly
be
asking
how
many
events
are
being
generated,
so
it
asks
Kafka,
every
polling,
interval
and
says:
hey:
are
there
unprocessed
messages
and
if
the
answer's?
No,
then
it
keeps
this
thing
scaled
down
with
the
answer
ends
up
being
yes,
and
just
like
I
showed
you,
you
watch
a
pop-up
and
then
potentially
scale
out
very
rapidly,
so
a
few
key
features
kind
of
based
on
the
demo
in
the
architecture.
You can scale any deployment or job based on event metrics by defining that additional CRD; we're just using Kubernetes CRDs to drive the experience. You can scale to and from zero, based on events, back and forth. It has 20 event sources, or scalers, built in, and it's completely extensible; the largest area of contribution and interest that we've seen is people adding these additional event sources. And, I mentioned kind of in passing, you can also say: hey, maybe I have a long-running job.
F
Maybe
every
cue
message
isn't
just
a
simple
order:
I
need
to
process,
maybe
it's
a
video
I
need
to
transcode,
and
so
you
can
actually
use
a
scale
to
object
mode
where
you
say
create
a
kubernetes
job
for
every
event
that
comes
in,
which
is
a
very
useful
model.
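In the v1 API this job mode lived on the ScaledObject itself; a rough sketch, with the image and queue purely hypothetical:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: video-transcoder
spec:
  scaleType: job            # spawn a Kubernetes Job per unit of work instead of scaling a Deployment
  jobTargetRef:
    parallelism: 1
    completions: 1
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: transcoder
            image: example.com/transcoder:latest  # hypothetical long-running worker
  triggers:
    - type: rabbitmq
      metadata:
        queueName: videos   # hypothetical queue of transcode requests
        host: RabbitMqHost
        queueLength: "1"    # roughly one job per queued message
```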
There are ways to define authentication, so we have ways to integrate with secrets and with other sources as well.
You can use pod identity if you're connecting to a cloud, to a cloud provider. So, for instance, if you're using the Azure queue scaler, KEDA integrates with Azure pod identity, and so you don't even have to pass in a password; it's just going to use its own identity to authenticate. There's support for that in AWS as well.
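The pattern described here is KEDA's TriggerAuthentication CRD; a sketch for the Azure queue case, with all names hypothetical:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-queue-auth
spec:
  podIdentity:
    provider: azure          # authenticate via the pod's managed identity, no connection string
---
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    deploymentName: queue-worker
  triggers:
    - type: azure-queue
      authenticationRef:
        name: azure-queue-auth   # reuse the identity-based auth above
      metadata:
        queueName: orders        # hypothetical storage queue
```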
And really, this is about letting you focus on your app and not have to worry about the scaling internals: manually wiring up the custom metrics, doing the work to do this by hand.
KEDA just makes it as easy as defining that scaled object. So, in terms of community, we've been really happy and pleased with the amount of energy that's happened around KEDA in its time. We've got about 2,000 stars on GitHub and a number of contributors, and this is across large corporations as well: Microsoft, Red Hat, IBM, Codit, Astronomer.io; this is just the few that I pulled off the top of my kind of stand-up sheets, there's much more. We have weekly stand-ups.
From the get-go, there's nothing in KEDA that's branded Microsoft or branded Red Hat; this is something that we've wanted to be community driven. So we have weekly stand-ups on Zoom; we actually have one coming up in about three hours. There's a website that has a list of all the scalers, and a few users who are using it across their solutions, to help add some more stuff. This was nice, I just noticed it when I was preparing the presentation: there were even some folks who were just tweeting, like, oh hey, KEDA, this actually looks super interesting.
This looks like what we're looking for. And then Richard chimed in and was like: yeah, we've actually been using this in production for a while now. So it's very simple; we didn't want to make this a full, complex thing doing 80 things. It's really just driving that event-driven scale, but it does that very well. So the last slide I have is in terms of why we are interested in the CNCF. I mentioned already, with KEDA,
our intent wasn't to reinvent the wheel; it's really building on those standards and those technologies that are being developed in the CNCF, like Kubernetes, so it makes it a natural home. Our intent has always been to do this open and community driven; while it started with Microsoft and Red Hat in a partnership, we really want to make this vendor neutral in every way possible. We feel like donating it to a foundation, with the CNCF, is a way to show that good faith with the community.
It's already MIT licensed, and we're planning, if this becomes sandbox, to use things like the CNCF CLA, the contributor license agreement, all those things. There's no kind of "we still want to hold on to this, that, or the other"; this is really our intent, to say: we feel this is a useful piece of tech, we've been using tech like this to run the Azure Functions service, this has been in the open now for a while, and we just want to go all vendor neutral now.
KEDA also integrates very seamlessly with a number of other CNCF projects: things like Virtual Kubelet, to scale out into virtual nodes; the scalers, a Prometheus scaler, NATS Streaming scalers; and Helm is the way we use to deploy it. And we're really looking for that vendor-neutral home for a key serverless capability, specifically in the serverless space. I think serverless has this connotation of being very vendor locked-in, and there's been some heated discussion about the CNCF and serverless in general. We're really hoping that KEDA can be one of those very nice pieces of serverless,
in addition to things like CloudEvents, that would tie in very neatly with the CNCF. So that's all I really wanted to share; I'll stop sharing here. I saw, yeah, a few comments; I think most of that is handled, so I think I'll just pause here. If there are any questions, or anything that you could use from us, I'm more than happy to share more.
We have a way to make them all external, and this is actually something that we've had discussions about, even as recently as last week's stand-up, which is: there's a world where you deploy KEDA and you kind of check all the boxes for all the scalers that you want, and now, instead of getting just those two pods, you have 15 of them, and each one's doing its own scaling thing. But we didn't want to make it too overloaded so far, so right now the majority of them run in the shared process.
We do have ways for you to plug in external ones; there are a few that only run externally, and this is something that we're still kind of evaluating, to make sure that we don't get the footprint too large or need to start to version these more independently. So we have the capability there, and there are some scalers that take advantage of it, but, mostly for convenience, we ship most of them just in the same process today.
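For the externally-run case, a trigger points KEDA at a gRPC service implementing its external-scaler contract, roughly like this (the address and pass-through keys are hypothetical):

```yaml
triggers:
  - type: external
    metadata:
      scalerAddress: my-scaler.default.svc:6000  # gRPC endpoint implementing KEDA's external scaler protocol
      queueName: orders                          # arbitrary keys are passed through to the external scaler
```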
I mean, that's not necessarily a bad thing, just because, I mean, if you end up having multiple processes for each scaler, then you're using more resources; but, like you said, they're lightweight, right? So maybe, yeah, if you add more processes there, then it wouldn't affect too much. Yeah.
Great, and I see one question in chat from Jay, real quick: integration with cluster autoscaler, not just the HPA. So there's nothing we do directly with cluster autoscaler. As far as I understand, though, and I invite others to chime in, how the cluster autoscaler works is that it will look at what the HPA is scheduling, and the resources that it's trying to schedule, and then, based on that, it can scale the cluster. So I believe, indirectly, KEDA would cause your cluster to scale, because KEDA is going to be telling the HPA you need to add more resources.
The HPA is going to be scheduling those, and then at one point the scheduler is going to be like: I don't have the space to put all these things that KEDA is telling me to schedule. And that would kick in the cluster autoscaler, which would then scale my entire cluster. So I believe they work together indirectly. This is a common question, though, so I am pausing a little bit in case Zbynek or others wanted to add anything.
One thing that we hear a lot with cluster autoscaler is support for more predictive, or schedule-based, scale-up of events. So I was thinking, you know, maybe, since KEDA seems quite flexible in that regard, in this wide choice of, sort of, the events that can trigger an action, I was thinking maybe it would be possible to integrate KEDA with cluster autoscaler and use the KEDA event sources as the triggers for cluster autoscaler, instead of just the pending-pods queue.
It honestly makes a ton of sense. I mean, just briefly looking at the cluster autoscaler stuff, it does look like there are some metrics, maybe these are metrics it exposes, but yeah, I think that makes a ton of sense. And even to your point about scheduling, one of the work streams that we've been funneling some resources into recently is kind of along those lines; like I mentioned, it's predictive, or it's less reactive, because it can actually see that, hey, there's a thousand messages in the queue.
Go and crank up the cluster autoscaler, or wake the HPA, in advance. So I think that makes a ton of sense. I'd be interested to know what integrations exist today to do the cluster autoscaling stuff, but there's nothing fundamental to how KEDA works that would prevent any of that. So I think that's all within the line of thinking of how we've been approaching KEDA as well. Great conversation.
Generally, I'm referring to being able to change the trigger for scale-up from the single thing that currently is supported by cluster autoscaler, which is that there are pending pods to be scheduled. There are alternate events for cluster autoscaler scaling down, so you can do, like, custom metrics and stuff like that to trigger a scale-down event; but for scaling up, meaning, you know, increasing the number of worker nodes in the autoscaling group, there's only one event trigger, as far as I know. So that's what I was referring to. Okay.
You know, with more resources that you actually want. And then, but also, just before you scale down, you want to know if there's something maybe coming up, or some event coming up in the next, maybe, ten minutes. So you want to keep your cluster up and running, because, let's say it's ten minutes: your event comes in and you've already scaled down, but now you're scaling back up, right? So we end up kind of thrashing, in a way, right?
So how it's set up for now is that it's purely focused on the application autoscaling, where we then rely on the cluster autoscaling to make sure that there is enough capacity. But maybe we should indeed also have a look at whether we can help on the cluster side of things. But yeah, we don't really have a plan there. And I see there's a nice question for you, Jeff, on Knative.
Thank you; I was busy creating a GitHub discussion issue around this conversation. Okay: any relationship with Knative? So, a few things here that are worth noting. In general, I think the short answer is: Knative, the idea of it, is to be an entire serverless platform, so it comes with, it does, about 20 things out of the box. KEDA is just a very single-purpose thing; it's like, I'm just going to be doing event scaling based on this kind of pattern that I talked about.
So one of the work streams that's kicked up as a result of the last KubeCon is that we now have an active work stream with the Knative group; we're looking at ways that they can leverage KEDA within Knative to add some additional functionality. For example, in Knative there's a way to get event notifications when there's a Kafka message; there's a pull request right now in the Knative repo that says: hey, if we actually took a dependency on KEDA, we could scale that thing down to zero when Kafka is empty.
In addition to that, if that helps. And thank you, Jay, for flagging; the AWS team, I might ping you afterwards, there's someone on our side too who's interested in some of the deeper Kubernetes integrations, so I'm going to see what we can do with this cluster one. So hopefully that answers the Knative stuff; let me know if there are any other questions there.
The thing is that everything it's scaling is an HTTP request. So it's either going to be a CloudEvent over HTTP, or an HTTP or gRPC request, from an application, or from a client, or from wherever else. So Knative primarily optimizes scaling today by looking at things like concurrency of HTTP requests and then driving scale that way. There's this thing behind the scenes, as well, that's taking Kafka events and turning them into CloudEvents over HTTP or gRPC that Knative might scale.
So KEDA approaches things slightly differently: KEDA does not look at HTTP requests; KEDA is actually looking at the end event source. So KEDA will look at Kafka, or RabbitMQ, or Prometheus, or whatever else, and drive scaling that way. So you can autoscale today; I think the reason that there's interest from both sides, both Knative and KEDA, in understanding how we can bring things together is that the trade-offs between both of those models, scaling only on HTTP and scaling based on the event source, have some differences, and both are valuable.
So the kind of long answer to your question is: you can do autoscaling in Knative today, but there are ways that you cannot autoscale in Knative today that KEDA can enable, and the Knative team is interested in lining that up. And there are a lot of caveats in that sentence, so it's a tongue twister.
Actually, I could also speak to that one a little bit. So I work on Apache Airflow, and originally we were looking to use Knative as our autoscaling system, and what we found was that, for long-running tasks, Knative is kind of not an optimal solution, because you have to keep an HTTP request open the entire time that a task is running. So we found that KEDA was a lot better suited for a more asynchronous, or worker-based, autoscaling system.
Thank you, Daniel, perfect, yeah. That's an example, too, of when everything's HTTP-based versus event-source-based; that's one of those trade-offs: long-running becomes a lot harder when you're trying to hold open an HTTP request for 20 minutes or whatever you might need.
Yep, yeah, that's great. So if, when a user has such a scenario, for example, they would like to scale out their deployment when some event comes in, usually they may not know the details of whether it's KEDA or Knative underneath, you know, since the features are similar; I know there may be some technical details that are different, and so forth. So what would your suggestion be to this user: in which case should they use KEDA, and in which case should they use Knative?
Yep, yep. And, if I understand correctly, just to kind of repeat the question: with KEDA, since you're connecting directly to the event source, the developer has to have knowledge of that event source, and so what do you do in the case where you don't want to have that direct knowledge of the event source? I think there are a few different answers to this one as well. I think my initial thinking, too, is CloudEvents.
I feel like that's a very good way that this has been solved, even in the CNCF: being able to say, hey, at the end of the day, we're just going to have events that you can subscribe to, and scale from that. KEDA could help scale that indirectly, through some of the things like metrics of the amount of CloudEvents that are being generated. But, in some ways, I'd almost say there are a few ways this could be done. So, Knative:
Knative definitely can do that part; Knative Eventing, specifically, is all about letting you subscribe without having any knowledge of the event source. There's a way that the Azure Functions service, like, how we built our serverless service, where we abstract that within the SDK that you actually deploy; so there's something called the Azure Functions runtime, and it abstracts a lot of the details of the underlying event source and enables you to just write code, but it's the container itself that's doing the abstraction; it's not some cloud event behind the scenes.
Then the final answer is just CloudEvents in general. So I think there are kind of three ways that you can cut it: different SDKs, like the Azure Functions runtime; Knative and Knative Eventing; or, at the end of the day, just using CloudEvents. So it's a trade-off, because it's one of those things too; this is one of the interesting discussions in the serverless community in general, which is, there's a push for:
how much do you abstract away the underlying event source from the developer? The benefit of it is the developer doesn't have to worry about the underlying details of that source, but the downside is the developer can't take advantage of something that's not a common denominator across multiple event sources. So I think both are important. Sometimes I want to know it's a Kafka stream that I'm connected to, because I need to checkpoint, and I need to do in-order processing, and I need to do things that are Kafka-specific. Other times,
I just know that there's a notification that happens to be coming from Kafka, but I just need to post something to Slack; CloudEvents works great there. So there's room for both, which is why I think there's room for both the KEDA style and the Knative style. I don't think it's going to be one-size-fits-all.
I think, I know of someone doing big data stuff today who is using KEDA, but they're using it before, like, the Spark layer, and they're scaling based on, I believe in their case, Kafka, which is the thing that is filling up all of the data; and then they're using Splunk to process those events that originated from Kafka but then are going through a different pipeline. So they're using KEDA to grab it from the Kafka side, and not from Splunk directly. So there's nothing to it, right?
So, if you can create a PR with recommendations? I mean, based on what I've seen, for sandbox I don't really have any major concerns, but there's a template that we follow, right? So I can show you what the template is, and I think there's another PR that we did for Volcano, and then, so.
The PR for Volcano, because I know that's the one you reviewed two weeks ago: do we open this pull request, or is this something where the leads of SIG Runtime need to fill out this template, and then we open the PR, and then you folks can add comments and then close it, like the ones they did? In terms of content, it's more or less our PowerPoint presentation in markdown, which is great, like, we've got the info, but I don't know if we need to put this together. Yeah.
So, yeah, hopefully they can put it up for a vote before KubeCon. Yeah, we'll see, we'll see. I mean, sometimes, so, the TOC just got three new members, so you might actually need more votes now. But, oh no, wait, wait, sorry, I'm talking about graduation, my bad, my bad, my bad. So for sandbox you need three sponsors, that's right, if I remember. So there won't be a vote, right? So you need to find, so, after we do the recommendation and we fill in the PR,
then you find three sponsors in the TOC, and then they'll basically say, okay, we want this project to be in sandbox, and they take it from there, and then they put it in sandbox. But I don't know if it really needs a vote; I know graduation needs a vote, right, because we're doing Harbor right now for graduation, but yeah.
Do you want us to reach out? Because I know one of the things with, I know it's a new process initiative, is they're like: hey, if you go through the SIG process, that way you don't have to just go and poke TOC members directly. Once this goes through, is it best if we do kind of go find three members of the TOC, and we're like: hey, we presented to SIG Runtime, here's the PR that got closed, they gave us the recommendation?
We can help out too, in terms of finding some more people, if you need more sponsors, based on our recommendation. But you can also go and contact some of the TOC members, and, you know, they can look at the presentation, or just the recording, and, based on that, you know, they make a decision on whether they want to sponsor the project or not, right? So, okay.
And then Harbor has already completed the review from SIG Runtime, and so that's set; because it's a graduation, that would be a TOC vote, right? So Michael, Michael will, he needs a review from SIG Storage, and I don't know if he needs a review from SIG Security, but after that it will be sent out for a TOC vote.
Thank you for joining. Yes, so if you want to present anything, or you want to add anything, an agenda topic, you know, feel free to add it to the doc. So we meet every two weeks, well, not exactly: the first and the third Thursday of the month. So any item that you want to add, feel free to add it there, and then we can discuss it in the meetings, be it a presentation or any concerns, any concerns about projects. I know, I mean, we're just getting started.
This group, you know, has been around for maybe a month and a half. SIG Security, for example, has a lot of other stuff; you know, for example, they have security reviews of projects, but that, I mean, that's outside the scope of this group. But anything maybe related to a runtime review, or, you know, maybe AI types of workloads, high-performance types of workloads, you know, is within the scope of the group.
So there's plenty of documentation there on how the sandbox, incubation, and graduation process works, and then, you know, how you take it up to the SIGs now. It used to be, before, that projects would go directly to the TOC, but the reason they're creating these SIGs is that there are a lot more projects, so they're trying to scale into different areas. So obviously they have runtime; now they have observability.