Description
Accelerating KNative, Like Never Before - Yafang Wu, HUAWEI
KNative is the most popular Serverless project in the Cloud Native world today, as KNative has some terrific features, e.g. portability when compared with other Serverless platforms. At HUAWEI CLOUD, we built our serverless platform based on KNative; there are tens of thousands of workloads running on it now. When we were building this platform, we found that improving performance and minimizing operational overhead are the key challenges. In this sharing, we will go over: 1) Minimizing memory overhead when you use KNative. 2) Improving the performance of the KNative Ingress Dataplane.
Hello, everyone, welcome to my session. This is Yafang from Huawei Cloud. Knative is the most popular serverless project in the cloud native world today, as Knative has some terrific features, for example portability when compared with other serverless platforms. At Huawei Cloud, we built a serverless platform based on Knative. There are tens of thousands of workloads running on it now. When we were building this platform, we found that improving the performance and minimizing the operational overhead are the key challenges, which is what we will go over in this sharing.
So serverless is a model that enables resources to be used as needed, virtually unlimited and continuously available without upfront provisioning, and priced in units of the compute consumed. Function as a Service is a part of the serverless world, but it is not the whole world.
As you can see from this slide, some vendors have released multiple serverless container services, for example AWS Fargate and Google Cloud Run. The main concern about serverless is vendor lock-in, and Knative is designed to eliminate it. So what is Knative?
Knative Serving provides auto scaling, which can even scale to zero, and revision tracking, so teams can focus only on core logic, using any programming language. Knative Eventing provides universal subscription, delivery and management of events. It builds modern apps by attaching compute to a data stream, with declarative event connectivity and a developer-friendly object model.
Huawei Cloud also open-sources its capabilities in the cloud native field to a diverse range of industries, through projects such as KubeEdge, Volcano and Karmada. When we were building a serverless platform based on Knative, we met many challenges. In this sharing, we will only focus on two of these challenges: memory overhead and performance loss.
So now, let's look at the background of these two challenges. The architecture of Knative Serving is shown in the figure. The user sends a request to the ingress gateway, and the ingress gateway forwards the request to the corresponding pod according to the routing rules. Knative Serving provides automatic scaling for applications to match incoming demand: the autoscaler collects specific metrics from the pods to make scale decisions in Knative.
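The scale decision described above can be sketched as a simple calculation. This mirrors the idea behind Knative's concurrency-based autoscaler, not its actual code; the function name and the numbers below are illustrative.

```go
package main

import (
	"fmt"
	"math"
)

// desiredPods computes how many pods are needed so that each pod
// handles at most targetConcurrency in-flight requests on average.
// A result of 0 means the workload can scale to zero.
func desiredPods(observedConcurrency, targetConcurrency float64) int {
	if observedConcurrency == 0 {
		return 0 // no traffic: scale to zero
	}
	return int(math.Ceil(observedConcurrency / targetConcurrency))
}

func main() {
	// 25 concurrent requests observed, each pod should handle at most 10.
	fmt.Println(desiredPods(25, 10)) // 3
	fmt.Println(desiredPods(0, 10))  // 0
}
```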
Knative Serving provides two ways to configure the resource requests of the queue-proxy. The first is fixed configuration: in this way, all queue-proxies will have the same resource request configuration. The second is proportional configuration: in this way, the resource request of the queue-proxy is determined by that of the user container in the instance.
For example, if we set the ratio to 50%, and the user container requests, for example, 100 megabytes of memory, then the queue-proxy will request 50 megabytes.
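The proportional rule above is just a percentage of the user container's request. A minimal sketch (the function name and integer-megabyte units are assumptions for illustration, not Knative's API):

```go
package main

import "fmt"

// queueProxyRequestMiB computes the queue-proxy memory request as a
// percentage of the user container's request, as in the proportional
// configuration described in the talk.
func queueProxyRequestMiB(userContainerMiB, percentage int) int {
	return userContainerMiB * percentage / 100
}

func main() {
	// The example from the talk: ratio 50%, user container requests 100 MB.
	fmt.Println(queueProxyRequestMiB(100, 50)) // 50
}
```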
A
The
number
of
requests
for
different
applications
and
price
compresses
may
vary
greatly,
which
means,
on
the
one
hand,
some
qproxy
doesn't
have
enough
resources
to
handle
incoming
requests,
resulting
in
low
memory
utilization
of
user
container.
On
the
other
hand,
some
q
proxy
may
have
too
much
resources
resulting
in
which
will
result
in
a
low
memory
utilization
of
field
proxy
itself.
A
A
This method has the following advantages: user container resources can be fully used; on the node, queue-proxy resources can also mostly be fully used; and there is much less resource cost when the queue-proxy is idle, even if there are many idle instances on the node. Here, idle means the pod doesn't process any requests.
With eBPF, we hook our program to socket operations in the kernel, which records sockets in a hashmap and redirects the packets according to that map. When a packet arrives at the socket layer, our program dispatches it directly to its destination. This much more direct route results in lower latency.
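The redirect logic can be modeled in userspace like this. In the real implementation it lives in eBPF programs attached to kernel socket hooks (the sockops hook records sockets in a BPF map, and an sk_msg-style hook redirects by map lookup); everything below, including the names and the port numbers, is a simplified illustration of that flow, not kernel code.

```go
package main

import "fmt"

// connKey identifies a local connection, like the tuple an eBPF
// sockops hook would see when a socket is established.
type connKey struct {
	srcPort, dstPort int
}

// sockMap models the BPF hashmap that records established sockets.
// The real map holds socket references; here we store a peer name.
var sockMap = map[connKey]string{}

// onSockOp models the sockops hook: record the socket when a
// local connection is established.
func onSockOp(key connKey, peer string) {
	sockMap[key] = peer
}

// dispatch models the redirect hook: if the destination socket is in
// the map, deliver the payload directly, skipping the lower network
// stack -- the source of the latency win described above.
func dispatch(key connKey, payload string) (string, bool) {
	peer, ok := sockMap[key]
	if !ok {
		return "", false // not a recorded socket: use the normal stack
	}
	return fmt.Sprintf("delivered %q directly to %s", payload, peer), true
}

func main() {
	onSockOp(connKey{8012, 8080}, "user-container")
	msg, ok := dispatch(connKey{8012, 8080}, "GET /")
	fmt.Println(ok, msg)
}
```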
Now, that's all, and we have some work to do in the future. We have submitted some pull requests to the Knative community before, to fix bugs or implement features, and we would like to contribute more in the future. In addition, we are trying software-hardware synergy to provide the best cost efficiency to all users.