From YouTube: RISC-V: The Lowest Layer of the Cloud-Native Landscape - Daniel Mangum & Carlos Eduardo de Paula
Description
RISC-V: The Lowest Layer of the Cloud-Native Landscape - Daniel Mangum, Senior Software Engineer, Upbound & Carlos Eduardo de Paula, Cloud Architect, Red Hat
Daniel: Hello folks, and welcome to KubeCon EU 2021. My name is Daniel Mangum and I'm a senior software engineer at Upbound, and I'm joined today by Carlos Eduardo, who is a cloud architect at Red Hat. We hope you all have had an awesome week so far, and we're looking forward to diving into a topic you likely haven't heard a ton about this week and may not be familiar with at all.
Folks often reference this image as a representation of how complicated and wide-reaching the space has become. While those criticisms are certainly valid, I believe it is also important to recognize the tremendous innovation we are experiencing, from infrastructure management to service meshes and everywhere in between. Individuals and organizations have more optionality than ever in designing a cloud-native platform that is highly tailored to their specific use case.
Furthermore, with the rise of a default-to-open-source mindset, we have the opportunity to try before we buy, greatly reducing the pains of vendor lock-in, which has been a trademark attribute of the technology industry for decades. In many ways we're in a software renaissance. But isn't there something missing here? While we have a robust open source software ecosystem, the platforms we design run almost exclusively on proprietary hardware and firmware, and until now this hasn't really been a problem. The promise of the cloud is that we don't have to worry about the underlying machinery.
We simply interact with an API. And don't get me wrong, this is a powerful model, and we will not be suggesting today that every company drop what they're doing and start building out their own foundry and developing custom silicon. Until now, this proprietary hardware model has actually worked quite well. So what makes today different from the last 50 years of computing? Or, in other words, why should I care?
In 1965, Gordon Moore made a prediction about the growth of the number of transistors in an integrated circuit. His assertion was that the number would double every year, which he revised 10 years later to every two years. The implication of this prediction, which did in fact come to fruition, was that computer programmers and system architects could rapidly improve the performance of their applications simply by upgrading to the newest hardware every few years, and with the advent of cloud computing in the mid-to-late 2000s, upgrading
that hardware was as simple as hitting an API endpoint or clicking a button in the cloud provider console. Around the time of Moore's revised prediction, Robert Dennard made a related prognostication about transistors, asserting in his 1974 paper that as the size of transistors shrinks, power density remains constant.
This fundamental truth has driven the computing industry for many years, but both Dennard scaling and Moore's law are plateauing due to the limitations of the physical world. For this reason, we are seeing a movement to custom hardware for specific computational activities, frequently referred to as domain-specific accelerators. You are likely already familiar with some of these hardware categories, for example the graphical processing unit or tensor processing unit. However, both the GPU and TPU are relatively general purpose compared to some of the more focused domain-specific accelerators.
This shift comes at a cost, though: software typically has to be modified to take advantage of the specialized hardware, meaning the days of simply deploying your workloads to a new, similarly priced machine and seeing drastic improvements could be coming to an end. In short, hardware is going to become more and more heterogeneous. So, now that we have sufficiently buried the lede here, let's actually talk about RISC-V. RISC-V is an open source instruction set architecture. While this may seem unsurprising or even expected by folks accustomed to the software industry and open source, an open ISA is a stark deviation from the traditional model of the hardware industry.
You may be thinking: aren't x86 and Arm open? We have compilers that target them, and I'm free to write my own assembly for them. That is true, but they're not freely available, meaning you're not able to implement your own processor that uses the ISA. Now you're probably thinking: I don't want to implement my own processor.
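As a concrete illustration of compilers already treating RISC-V as just another target, here is a hedged sketch of cross-compiling for 64-bit RISC-V (it assumes you have the Go toolchain and a GCC RISC-V cross-compiler installed; exact package names vary by distribution):

```shell
# Cross-compile a Go program for 64-bit RISC-V Linux
# (riscv64 has been a supported GOARCH since Go 1.14).
GOOS=linux GOARCH=riscv64 go build -o hello-riscv64 .

# Or with GCC, via a Linux cross toolchain (on Debian/Ubuntu the
# package is roughly gcc-riscv64-linux-gnu).
riscv64-linux-gnu-gcc -O2 -o hello-riscv64 hello.c

# Inspect the result: `file` should report a RISC-V ELF binary.
file hello-riscv64
```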
A
So
what's
all
this
for
you'll
notice
that
we
and
most
folks
you
talk
to
who
are
bullish
on
the
future
risk
5
are
not
under
the
impression
that
all
hardware
needs
to
be
open
source.
In
fact,
many
of
them
are
building
proprietary
companies
based
around
it.
The
value
of
risk
5
is
that
it
is
an
open
interface
of
which
there
are
many
closed
source
and
open
source
implementations.
A useful comparison in the cloud native ecosystem is Kubernetes itself. Many of the companies sponsoring this very event provide Kubernetes distributions that have a unique value proposition for customers. At this point, few end users are actually installing and managing the open source Kubernetes implementation.
A
However,
the
fact
that
anyone
can
implement
the
kubernetes
api
open
or
closed
is
what
allows
us
to
have
a
landscape.
Like
we
looked
at
earlier,
as
with
kubernetes,
there
will
be,
and
already
is,
countless
risk.
5
implementations
all
adhering
to
a
common
modular
specification
that
allows
implementers
to
cater
to
specific
use
cases
that
can
be
targeted
by
any
tooling
for
kubernetes.
This
tooling
is
operators
for
risk
5,
it's
compilers.
On an earlier slide I mentioned that there are trade-offs between open source and proprietary. As an industry and as a community, we must critically evaluate whether open sourcing a project creates or diminishes value. For many years, proprietary ISAs have actually created quite a lot of value: they've allowed for a consistent set of targets for software to run on. In some ways, the barriers to entry of the microprocessor industry have been a feature rather than a bug.
If the dynamics of compute performance were not fundamentally changing, we might not need an open source ISA. But the fact of the matter is that they are, and this change necessitates a change in how the industry operates. Hardware must become more fragmented to continue to satisfy our complex computing demands, but we don't want to sacrifice the ability for software to target a common interface.
Carlos: Thanks, Dan. And how is RISC-V doing in the panorama of cloud applications and orchestration? We are already in pretty good shape. Kubernetes already runs on the RISC-V architecture, and we can even deploy some applications onto it. Here we can see the HiFive Unmatched, the first RISC-V fully featured computer in a PC form factor. It already runs mainline Linux, and the board has a quad-core processor and 16 GB of RAM, making building and developing applications for RISC-V much easier.
Then I started building many container images to be able to run Kubernetes and its applications on RISC-V: OpenFaaS, the Traefik ingress controller, CoreDNS, Flannel, and many more, all required to support Kubernetes and to run these cloud applications. I also had to build the base images used to run these applications, like a Debian base image; they still don't exist in the upstream repositories, so we have to build them in a separate tree.
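One way to build such images today is with Docker Buildx plus QEMU emulation; a minimal sketch, assuming a Buildx-enabled Docker install (the binfmt installer image and the image tag here are assumptions, not the speaker's actual setup):

```shell
# Register QEMU emulators so riscv64 binaries can execute
# inside the build (tonistiigi/binfmt supports riscv64).
docker run --privileged --rm tonistiigi/binfmt --install riscv64

# Build the current directory's Dockerfile for the
# linux/riscv64 platform and push the result.
docker buildx build --platform linux/riscv64 \
  -t example.org/myapp:riscv64 --push .
```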
All these changes, projects, and images are tracked in a project that I call the RISC-V bring-up project, hosted on my GitHub account. I'll post the link at the end of the presentation for you to follow up on the news and the projects that are being tracked. I've had a lot of help from the community, and you all can help with this.
We have some points that need to be addressed and that will allow us to progress, like having official support from Linux distributions. Most of them already support building their packages for RISC-V; almost ninety percent of their packages already run and build on RISC-V, but they're still not in the main distribution branches, so we still need to configure it as, for example, unstable or experimental.
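On Debian, for example, riscv64 packages have historically lived in the separate debian-ports archive, tracking unstable, rather than in the main archive; a rough sketch of enabling it (the archive URL and keyring package name are assumptions and may have changed since):

```shell
# Install the keyring that signs the debian-ports archive.
apt-get install -y debian-ports-archive-keyring

# Point apt at the ports archive, where the riscv64 port lives.
echo "deb http://deb.debian.org/debian-ports sid main" \
  > /etc/apt/sources.list.d/riscv64-ports.list
apt-get update
```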
Once these distributions are upstream and releasing their installation packages for RISC-V, we can also have changes to the image generation, so we can have riscv64 in the manifests of the main images as well, for example the Debian, CentOS, and Fedora images, and that will allow us to build the many applications that we need based on official images.
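Adding riscv64 to an image's manifest list is mechanically simple once a riscv64 variant of the image exists; a hedged sketch with the docker CLI (all image names here are hypothetical):

```shell
# Combine per-architecture images into one multi-arch manifest list.
docker manifest create example.org/myapp:latest \
  example.org/myapp:amd64 \
  example.org/myapp:riscv64

# Mark the riscv64 entry with its platform, then push the list.
docker manifest annotate example.org/myapp:latest \
  example.org/myapp:riscv64 --os linux --arch riscv64
docker manifest push example.org/myapp:latest
```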
Once all these images are upstream, we can start pushing new PRs to the projects, allowing them to build binaries for RISC-V in their automated CI pipelines, so we can have RISC-V as a first-class citizen in the cloud native foundation. Now I'll show a quick demo of me running Kubernetes on my RISC-V PC, the HiFive Unmatched, and deploying a simple hello-world application.
It seems trivial, but for an architecture that got mainline Linux support less than three years ago, it's quite some progress. And things are progressing so fast that we already have other applications running on RISC-V, like Node.js and many more. Thank you very much, and I hope you enjoy it. Here on the right we have two windows: on top, my own computer, and on the bottom, the HiFive Unmatched. Let's take a look at our Kubernetes nodes.
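A check like the one in the demo might look as follows (a sketch; node names and versions will differ on your cluster):

```shell
# List cluster nodes with OS and kernel details; on a RISC-V
# board the node's reported architecture is riscv64.
kubectl get nodes -o wide

# Or pull the architecture straight out of the node objects.
kubectl get nodes \
  -o jsonpath='{.items[*].status.nodeInfo.architecture}'
```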
In the details we have some containers running; the node is running on the riscv64 architecture and running version 1.20.4, which is pretty recent. Now let's take a look at our running pods. We have the system pods running, and OpenFaaS, a functions-as-a-service platform, running in our one-node cluster as well. Let's take a look.
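The pod listing and a minimal hello-world deployment from the demo could be sketched like this (the image name is hypothetical; on riscv64 you need an image actually built for that architecture):

```shell
# Show the system pods and the OpenFaaS pods in every namespace.
kubectl get pods --all-namespaces

# Deploy a simple hello-world application and expose it.
kubectl create deployment hello \
  --image=example.org/hello-world:riscv64
kubectl expose deployment hello --port=8080
```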