A: Welcome to the CNCF on-demand webinar "Reduce the Carbon Footprint of Your Cloud Native Workloads." I'm Robert Duffner from the product team at ITRenew. Today we welcome Andy Randall, Chief Commercial Officer at Kinvolk, and Eric Riedel, Senior VP of Engineering at ITRenew. At the end of the discussion we will answer some questions, so please stay with us. With that, I'll turn it over to Eric to get the discussion going.
B: All right, thanks Robert for the introduction, and thanks Andy for joining us for the webinar. What we want to talk about today is a topic that maybe doesn't come up that often in CNCF circles: the interaction with the hardware that underlies all of the infrastructure that CNCF and the world build on from day to day.

B: Luckily, most people don't have to worry about their hardware, because there are those of us in our corners of the industry taking care of what lies underneath, and occasionally we like to surface a little bit of what we're doing. Our focus today, based on some of the Sesame products that we have at ITRenew, is, in this week of Earth Day, the carbon footprint that hardware carries as well as how it interacts with the large-scale ecosystem. That's why we asked Andy to join us.

B: Andy is a long-time open source software advocate and implementer, so hopefully we'll outline how openness applies to hardware as well as it does to software, and how all of it can be used to be more efficient: more efficient in the software, the systems, and the applications, and more efficient in the way that we produce the hardware underneath.

B: So Andy, do you want to talk a little bit about Kinvolk and the software infrastructure? We'll start at the top and work our way down to the hardware.
C: Yeah, absolutely. The first thing to point out is that when we thought about how to build this solution, we wanted it to be open from top to bottom: an open hardware architecture and an open software architecture. ITRenew and Kinvolk have really collaborated together as a team to deliver this, and that goes back to some of the founding principles that Kinvolk was established on over five years ago.

C: Right at the beginning of the cloud native revolution, we started with a team that had a lot of expertise in Linux and the low-level layers of the cloud native stack, and built on that with container technologies and Kubernetes expertise as well.

C: The values that we set the company up around were all about open source: contributing, cooperating, community, welcoming. That embodies both how we work with ITRenew to deliver the open systems we're going to talk about more, and how we want to work with the users, the customer community, and other vendors and partners out there.

C: So that's a little bit of how we at Kinvolk think about things. We take this expertise, we take these values, and that's the direction we're pushing in.
B: Beautiful, thanks Andy. So, just to briefly introduce ITRenew: as a company it has been around almost 20 years, but Sesame, as a line of rack-integrated server, storage, and networking, is a little more than two years old.

B: So we're a little bit in the startup phase, but building on an established base. The other thing we're building on, similar to the communities of CNCF and of Linux that so much of our software is based on, is the Open Compute Project. The Open Compute Project is just under ten years old and was originally started by Facebook.
B: There were other hyperscalers before, most notably Google and Amazon, that had started innovating in hardware on their own; then Facebook and a number of others, Arista and a small set of vendors, were responsible for bringing that into the open and saying: hey, can we do hardware innovation in the same ways that we do software innovation? We already collaborate globally.

B: The hardware industry is global in its implementation: lots of dependencies, lots of supply chain, which, as we've seen recently, has its pluses and minuses, but this is how we've built the industry, with global, worldwide collaboration. It was often being done one-on-one, though, or among a small number of vendors. I personally spent nine of the last ten years at EMC, and then Dell after the merger.
B: We did lots of collaborations with lots of different hardware and software partners, but it was often done in the service of ultimately proprietary platforms. What Open Compute does is bring that community explicitly into the open, and what you see in the visual I have up is the huge breadth of projects that community has been able to bring together over the last nine years.

B: There are over a hundred active projects, nearly 200 projects all told, and they span a very wide range of hardware and related systems, all the way down to data center facilities. That is literally about the physical infrastructure: the concrete, the pipes, the cooling, and so on, where teams have been able to innovate in incredible ways to drive efficiency. Any electricity and water wasted cooling or otherwise managing the data center could better be applied to the actual computing, and a number of metrics there have been much reduced, from 30 or 40 percent overhead to the four or five percent overheads that data centers are able to provide today.
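The overhead figures Eric cites map to the data-center PUE metric (total facility power divided by IT equipment power). A quick illustration with the percentages mentioned, assuming steady-state power draw (the kilowatt numbers below are made up purely for the arithmetic):

```python
def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + overhead_kw) / it_kw

# A legacy facility wasting 40% of input power on cooling etc.
legacy = pue(it_kw=1000, overhead_kw=400)
# An optimized OCP-style facility with ~4% overhead.
optimized = pue(it_kw=1000, overhead_kw=40)

print(round(legacy, 2), round(optimized, 2))  # 1.4 1.04
```

A PUE of 1.0 would mean every watt entering the building reaches the IT equipment.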
B: Similarly, in terms of server innovation, there has been a rethinking, starting nine years ago and sometimes longer with some of the other hyperscale vendors, of what is really necessary inside a server: what is core and what is context. Similar design simplifications have taken place over that time frame that reduce the total number of components, which makes the system simpler but, as a side effect, also more reliable (fewer components means fewer components that can fail) and more efficient: with a smaller number of larger components there can be greater mechanical and electrical efficiency. For example, we use larger fans that move a larger volume of air for the same amount of input electrons, and larger power supplies that have less waste in their conversion.
B: The final thing I want to mention while we have this visual up is that Open Compute includes not just hardware vendors (Intel, Wiwynn, Quanta, Arista, et cetera) but also end-user companies such as Facebook, AT&T, and Microsoft, and component vendors; you see Edgecore and Delta Electronics there. So we're really bringing to the table the component vendors, those who design the power supplies and know more about the power supply than you ever really wanted to know; the system integrators, who bring those together into operational systems; and then the end users, who know how it actually operates in the data center. That has been incredibly powerful for us in the hardware space, and I would assume it's very similar in the software space, Andy.
C: Yeah, absolutely, Eric. If you look at the velocity the cloud native community has moved at over the last few years, it wouldn't have been possible in a proprietary world. It's all a result of the kind of collaboration we see between vendors and end users, all coming together and working in the open.
B: Right, absolutely. And just to nod to the worldwide aspect of what we're doing: Robert is in California, in Silicon Valley; I'm actually in a seaside town south of Boston; and Andy is in Germany, in the metropolis of Berlin. So even this webinar is global in its production, and of course the audience will likely be in every time zone around the world. That has really given us a lot of power and synchronicity. All right, so let me talk a little bit about the hardware footprints that underlie what we're doing here, now that I've given the backdrop of the Open Compute designs.
B: What we're showing now in the visual is a couple of the solutions from ITRenew. If you start in the center, what we have is an integrated Sesame rack (storage, compute, networking) as it's ready to ship to one of our customers. We have a mix of compute nodes and storage nodes so that different workloads can use different aspects of the system.

B: When we then add Kubernetes, the container orchestration system, customers are able to get exactly the same type of flexibility that they're used to in the public cloud, which has exactly the same hardware underneath; there are lots of servers in serverless, no matter how you do it. What we're able to provide at the design level is that type of flexibility, so when we work with our customers we ask: how many containers are you going to run, how many cores, how much networking bandwidth, and so on, and then we'll take care of fitting that into a rack.
B: As you see in the center here, there's a rack with about a dozen nodes and some high-density storage at the very bottom. Then what you see on the right-hand side is a three-rack build-out that is actually one part of a build-out we're doing together with a customer and partner, Blockheating, in Amsterdam.

B: Those three racks are part of an 18-rack footprint that Blockheating is using to manage a computing infrastructure, but they're adding a second benefit: they're using the exhaust heat, the waste heat from the CPUs in those racks, to heat up water, which then heats their greenhouses, shown at the bottom.
B: So not only do we use the hardware for cost-efficient, energy-efficient computing, but then there's the waste heat that is generated: conservation of energy means every electron that goes in as power has to be exhausted as heat. Blockheating uses that heat again to warm up their greenhouses and grow tomatoes. So we've taken the efficiency of the hardware to the ultimate level, but we've also made additional use, in this case, of the energy that's dissipated. And then finally, on the left-hand side, is a desk-side unit. This is something that has been quite popular with our engineers and some of our customers during the pandemic.
B: It's a way to have a four- or five-node hyperscale cluster under your desk: the unit plugs into a standard 110-volt electrical outlet, sits underneath your desk, and has exactly the same nodes as would be found in the rack. Our engineers use it to design and develop the systems, and our customers use it for benchmarking and PoCs.

B: It also gives many of our customers a sense of what's possible in reimagining the footprint of computing. There are a number of designs I won't go into in this discussion, but for edge computing, in all different types of wiring closets, the corner of a mall, a real estate scenario, or a manufacturing facility, it really makes sense to bring not an entire rack but maybe three, four, or five nodes, and then there are a hundred or a thousand locations.
B
So
imagine
a
retail
customer
with
a
thousand
physical
stores.
Each
store
has
three
or
four
servers,
and
they
want
to
treat
that
as
a
3000
to
5000
node
kubernetes
cluster,
because
it
really
is
globally
distributed.
Has
a
global
workload,
needs
kind
of
orchestration
and
monitoring,
just
as
a
data
center
infrastructure
does,
but
now
it's
it's
widely
distributed
and
and
with
our
capabilities
of
of
reimagining
the
hardware
we're
able
to
to
bring
that
to
bear.
B: That's right, Andy, thanks for the reminder: the crate, the box, in fact comes with two cables, so the same power supply can also be used in 220-volt countries. It actually gives a little bit more juice in some cases when we use it in the racks.
B: So Andy, do you want to talk a bit about the analogous side? What I've tried to present is some of the components and the details of how we build up a hardware stack. Do you want to talk a bit about the software? It now goes in reverse: let's go further up the stack.
C: Yeah, of course. Each of us looks at it from a different perspective: to the software folks it's "just give us some hardware, that's the easy bit," and the hardware folks think the software that runs on top is the easy bit. But it's where they come together that the magic actually happens, and that's what's so exciting about what we're doing here. You see that in this chart: it all starts with lifecycle management, and what that experience is when you first start to use a Sesame rack.
C: We put a lot of effort into working together so that when you get that rack delivered, everything is pre-configured. All the software you need is located on the management node, so we can deploy to the rack in a matter of a few minutes; we know what servers are there, and we don't have to pull down all of the images.

C: That enables us to do literally a single command to provision the rack how you want. There may be some configuration options you want to set to adjust how it integrates with your network and things like that, but essentially everything is there within the rack, and not just at deploy time: when there are updates, the whole stack is designed to be able to take those updates and automatically deploy them into the rack.
C: So that's a lot of value right there. At the next layer up from the hardware, we build the system around the Flatcar Container Linux operating system, which has a lot of advantages for systems like this. It's optimized for running containers: it is a minimal distro, so it has just what you need for running containers, but we've also tested, qualified, and verified it on the Sesame hardware.
C: So you know that it's going to work and it's going to keep working. It's also a very secure base. The fact that you're running everything within containers means you can think of the operating system as an immutable thing that only ever gets updated when you do a full OS update: you switch from the base partition that's currently running to an update partition.

C: If that upgrade doesn't work, you switch back. You don't have to worry about package management, and you don't have to worry about attack vectors where malicious actors modify the operating system on disk; all of that is protected. It's basically the best basis, from an OS perspective, for running cloud native workloads. Then onto that we deploy Kubernetes, where the core of the Kubernetes experience is just vanilla upstream; there's no special distro version here with modified pieces.
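The dual-partition update scheme Andy describes can be pictured as a small state machine. This is a toy model to show the idea, not Flatcar's actual update-engine code:

```python
class ABSystem:
    """Toy model of an A/B (dual-partition) OS update scheme."""

    def __init__(self, version: str):
        self.partitions = {"A": version, "B": None}
        self.active = "A"

    def _inactive(self) -> str:
        return "B" if self.active == "A" else "A"

    def stage_update(self, new_version: str) -> None:
        # The new image is written to the *inactive* partition;
        # the running system is never modified in place.
        self.partitions[self._inactive()] = new_version

    def reboot_into_update(self) -> str:
        self.active = self._inactive()
        return self.partitions[self.active]

    def rollback(self) -> str:
        # If the new version misbehaves, switch back to the
        # previous, still-intact, known-good partition.
        self.active = self._inactive()
        return self.partitions[self.active]


node = ABSystem("3033.2.0")        # version strings are illustrative
node.stage_update("3139.1.1")
assert node.reboot_into_update() == "3139.1.1"
assert node.rollback() == "3033.2.0"   # old image untouched
```

The key property is that an update can never leave the machine with a half-modified OS: either you boot the new partition or you fall back to the old one.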
C
This
is
the
open
source
community
kubernetes,
but
on
top
of
that,
then
we
have
a
curated
set
of
components
that
we
deploy
so
for
networking,
for
storage,
for
monitoring,
etc
and
those
components
it's
it's
not
just
that
we
select
the
the
right
components
to
deploy
that
they
all
work
together.
We
test
them.
We
give
you
defaults
out
of
the
box,
so
you
really
have
to
think
about
what
are
all
of
the
configuration
options.
I
need
to
get
these
things
working
together.
C: Part of those components are for monitoring and telemetry, so there's Prometheus with dashboards, and you can see what's going on from the top of the stack through to some of the hardware monitoring pieces as well, all in one dashboard. Then at the very top level you have a management UI, a clean, extensible UI for seeing what's happening within the cluster: which nodes there are, what pods are running. And we're increasingly building this out with more and more capabilities in terms of what we call systems intelligence.
C
So,
starting
with
plugins
based
on
a
technology
called
ebpf
to
do
things
like
trace
monitoring
of
syscalls
that
your
applications
are
performing,
so
it
even
stores
these
on
disk
out
of
a
ring
buffer
from
the
kernel.
So,
in
the
event
that
something
crashes
you
can
say
what
was
happening
up
to
the
point
that
crashes
helps
you
diagnose
and
debug
things
happening
in
your
cluster
and
and
also
identifying
where
you
can
enhance
security.
So,
for
example,
defining
network
policies.
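The "flight recorder" behavior Andy mentions, keeping only the most recent events from a kernel ring buffer so they can be dumped after a crash, can be sketched like this. This is an illustrative user-space analogy only; the real tracing happens in the kernel via eBPF:

```python
from collections import deque


class FlightRecorder:
    """Keep only the most recent events, like a fixed-size ring buffer."""

    def __init__(self, capacity: int):
        self.events = deque(maxlen=capacity)  # oldest entry auto-evicted

    def record(self, event: str) -> None:
        self.events.append(event)

    def dump(self) -> list:
        # On a crash, persist the buffer so the last moments
        # before the failure can be inspected.
        return list(self.events)


rec = FlightRecorder(capacity=3)
for ev in ["open(/etc/conf)", "read(fd=3)", "write(fd=1)", "close(fd=3)"]:
    rec.record(ev)
print(rec.dump())  # only the last 3 events survive
```

The fixed capacity is what makes this cheap enough to leave running continuously: memory use is bounded no matter how many syscalls occur.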
C: So that's kind of the layers of the stack, and I guess the key point is that everything here is 100% open source and 100% community driven; there are no proprietary pieces we're trying to sneak in at some layer. What Kinvolk brings is both development and curation of this stack, with updates made available in an automatic way. It's not just a day-zero and day-one issue; it's day two and thereafter, really thinking about the full lifecycle of the experience you're going to have with this software deployment.
B: Right, Andy, and that day two and day N is really the most important part of all of this. The experience we've been optimizing together with Kinvolk is 60 minutes from truck to workload; that's an analogy, or a focus, I often give our customers. We can deliver a rack on about a week's lead time, and then, once it arrives on the truck, it's fully cabled and fully integrated, so we can roll it right into the data center, plug it into the floor for power and the wall for networking, and be running workload 60 or 90 minutes later. One key to that is the pre-qualification and implementation of that software infrastructure.
B: That way there are no sudden surprises, where the network driver is wrong or this cable doesn't plug into this connector, which can often lead to days and weeks of delay. The last part to point out in that vein is that the install time, the setup time, whether it's 60 or 90 minutes, is only a small fraction of the lifetime of that system.
B: The most important thing is that it's going to spend many years running workload. As much as we go back and forth about software versus hardware, all we've talked about here is the infrastructure level; the real heroes, the real people doing the work, are the application developers who, since we're taking care of the hardware infrastructure and the software infrastructure, are able to focus on mobile applications, web applications, databases, all the things that really make computing work, because we're taking care of the plumbing of both the hardware and the software. Absolutely.
C: Yeah, another way of putting what we're trying to do here, Eric, is to say we're trying to deliver an experience as close as possible to a managed Kubernetes service, but you get it on-prem, you own the whole stack, and it's open. That allows you and your developers to focus on what's running on top of that infrastructure stack; you should not be spending time worrying about which version of your networking plugin or your storage plugin you've got.
B: Right, and it's similar for us: we also don't want anyone to have to worry about the difference between RJ45 and SFP and QSFP and all the other standards at the hardware level. We'll take care of that in our qualification labs and in our production facilities.
B: All right. So the last piece I want to get to is what we advertised at the front: the carbon footprint. The next visual I've put up is specific to the carbon footprint of this equipment, and we've now laid in all the pieces that make this picture relevant.
B: What we've done in our hardware designs, with the Open Compute community, is make the systems as efficient as possible when they're running: low PUE, high density, high translation of electrons into useful work. We've made the software infrastructure as efficient as it can be: a streamlined operating system, containers, orchestration, and monitoring that looks for things that are out of whack or using an unnecessary set of resources.
B: So the infrastructure has done all it can to make the system as efficient as possible, to remove waste, if you will: waste in the hardware design, waste in the software layers. But at the end of the day we're still going to have a carbon footprint, and there's one more aspect that the ITRenew approach, the Sesame approach, brings to the picture, which is illustrated here. What we're showing in this chart, on the left-hand side, is the total number of new servers delivered in 2020: 46 million servers, which constitute over three million tons of CO2 equivalents, the equivalent of over six million cars added to the road. And as we know, the pandemic caused an increase in computing demand and so also an increase in computing production, to the extent that we have delays and hiccups in the supply chain.
B
But
the
the
numbers
for
for
2021
will
be
even
significantly
larger
than
this
right,
so
very
large,
carbon
footprint
right
and
so
what
we're
showing
on
the
right
hand,
side
is
a
a
nine
year,
co2,
equivalent
comparison
of
a
traditional
model
on
the
on
the
right
hand,
side
with
the
big
blue
bar
and
the
big
orange
bar
versus
a
sesame
model,
and
what
we're
doing
is
we're
saving
in
two
ways
right
and
one
we're
reducing
the
operational
footprint
and
that's
the
efficiencies
that
we've
just
been
talking
about
right:
the
efficiencies
of
of
open
compute
design,
the
efficiencies
in
the
data
center,
some
efficiencies
in
software
and
so
on.
B
Right.
So
the
operational
phase,
the
power
that
is
used
by
the
systems
while
they
are
running,
is
the
orange
bar
and
and
that's
being
reduced
by
being
more
efficient
in
the
way
that
we
design
and
and
build
the
systems
right,
but
that
still
leaves
the
the
the
energy
that's
being
expended
in
what
is
here
called
the
pre-use
phase.
So
what
we
were
also
referred
to
as
the
embodied
energy
of
those
servers
right.
So
all
of
those
servers
all
the
components,
the
highly
integrated
cpus,
the
integrated
memories.
Networking
interfaces
super
dense.
B
Super
complicated
technology
has
to
be
fabricated,
with
processes
that
are
both
energy
intensive
water
intensive
in
a
lot
of
cases,
not
people-intensive,
because
we've
automated
a
lot
of
it,
but
the
robots
again
need
to
be
fed
with
with
electrons
and
and
water
right,
and
so
what
we
show
on
the
right-hand
side
in
kind
of
the
standard
today
system
is
we
show
a
customer
that
refreshes
their
hardware
every
three
years,
and
so
that
means
there
is
a
a
pre-use
burden,
an
embodied
energy
burden.
Every
time
they
install
new
hardware
in
the
system.
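The two bars Eric describes can be put into a back-of-the-envelope calculation. All numbers below are made-up placeholders purely to show the structure of the comparison, not ITRenew's actual data:

```python
# Hypothetical per-server figures (illustrative only):
EMBODIED_CO2 = 1.0    # tons CO2e to manufacture one server ("pre-use")
ANNUAL_USE_CO2 = 0.5  # tons CO2e per year of operation


def nine_year_footprint(refresh_every: int, annual_use: float) -> float:
    """Total 9-year CO2e: one embodied cost per refresh cycle, plus operations."""
    cycles = -(-9 // refresh_every)  # ceiling division: refreshes in 9 years
    return cycles * EMBODIED_CO2 + 9 * annual_use


# Traditional model: new servers every 3 years -> 3 embodied costs.
traditional = nine_year_footprint(refresh_every=3, annual_use=ANNUAL_USE_CO2)
# Reuse model: one embodied cost amortized over all nine years, plus an
# assumed 10% operational saving from more efficient designs.
reused = nine_year_footprint(refresh_every=9, annual_use=ANNUAL_USE_CO2 * 0.9)
print(round(traditional, 2), round(reused, 2))  # 7.5 5.05
```

Whatever the real per-server numbers are, the structure is the same: reuse attacks the blue (embodied) bar, and design efficiency attacks the orange (operational) bar, and the two savings are additive.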
B
So
what
we
do
with
it
renew
is:
for
the
last
18
years,
we've
been
in
the
business
of
decommissioning
data
center
equipment
and
helping
companies
like
the
largest
hyper
scalers,
including
microsoft,
google,
facebook,
uber
dropbox
kind
of
a
general
who's
who
of
of
the
hyperscale
business,
and
we
help
them
extend
the
life
of
their
equipment
and
then,
when
they
are
done
with
it,
then
we
create
secondary
and
and
sometimes
third
uses
for
that
equipment
to
reduce
the
blue
bars.
So
we're
squeezing
out
the
pre
use
by
not
building
new
servers.
B
So
if
we're
able
to
to
take
a
set
of
servers
that
are
coming
off
the
line
from
an
existing
hyperscale
customer
and
extend
their
life
years,
four
through
six
off
in
the
sweet
spot,
but
years
seven
through
nine
can
also
be
be
helpful.
To
reducing
the
footprint,
then
we're
able
to
to
do
a
massive
savings
that
is
additive
to
the
savings
that
we
do
in
the
orange
bar
right.
So
the
orange
bar
savings
remain,
if
we're
more
efficient
with
the
electrons
more
efficient,
with
with
the
infrastructure
more
efficient
with
with
the
applications.
B
Of
course,
all
right
also
think
of
the
the
algorithms
right
o
log
n,
rather
than
o
of
n.
It's
still
important
ultimately,
but
we're
able
to
provide
an
efficiency
at
the
at
the
hardware
layer.
So
that
is
a
is
a
super
powerful
kind
of
innovation
that
that
we're
bringing
to
the
marketplace
for
our
customers.
Today,.
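Eric's aside about O(log n) versus O(n) is easy to make concrete: finding an item in a sorted list of a million entries takes about 20 comparisons with binary search versus up to a million with a linear scan. A minimal sketch that counts the comparisons:

```python
def linear_steps(items, target):
    """Comparisons a linear scan makes before finding target."""
    for steps, x in enumerate(items, start=1):
        if x == target:
            return steps
    return len(items)


def binary_steps(items, target):
    """Comparisons binary search makes on a sorted list."""
    lo, hi, steps = 0, len(items), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        elif items[mid] > target:
            hi = mid
        else:
            return steps
    return steps


data = list(range(1_000_000))
print(linear_steps(data, 999_999))  # 1000000 comparisons
print(binary_steps(data, 999_999))  # about 20 comparisons
```

The same efficiency mindset applies at every layer: a better algorithm saves electrons just as surely as a better power supply does.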
B
Andy
any
any
comment,
terry
to
to
wrap
us
up
or
about
the
the
footprint
on
the
software.
C: I think it's increasingly a concern for companies across every industry: what is their environmental footprint? At the same time, digitalization is increasing across every industry, so it's logical that the two come together. The other thing that intersects with this is cloud usage, and one of the things we're trying to do here is enable customers to mix and match where they put their workloads.
C
You
know,
in
some
cases,
it'll
make
sense
to
put
workloads
in
cloud
and
that
might
be
energy
efficient
in
some
scenarios,
but
you
can
put
workloads
on-prem
in
a
data
center
that
you
rent
using
this
kind
of
solution
and
get
you
know,
get
better
in
many
ways:
environmental
footprint
and
also
better
from
much
better
from
a
cost
perspective
as
well.
So
what
this
does
is
it
allows
you.
It
is
another
kind
of
tool
in
your
toolbox
to
reduce
the
environmental
footprint
and
you
know,
cost
optimize.
At
the
same
time,.
B: Right, yeah, absolutely, Andy. I didn't mention this up front, but for the Sesame product line at the moment, almost 50% of our customers are cloud service providers themselves. So the end users that are creating the applications and running the servers and systems are renting this infrastructure from the cloud service providers that are our customers.
B
We
also
work
with
with
some
of
the
hyperscalers
to
to
self
self
recertify
to
self
extend
life,
extend
their
their
hardware
so
that
the
the
footprint
reduction
that
I'm
talking
about
and
the
innovations
in
the
hardware
space
are
accessible
to
to
everyone,
so
so
they're
accessible
to
a
small
business,
a
medium
enterprise,
a
large
enterprise
that
that
already
has
an
on-premise
footprint
for
whatever
reason,
as
well
as
service
providers
of
of
all
shapes
and
sizes
around
the
world.
C: Yeah, and one of the things that having Kubernetes-based cloud native software infrastructure allows you to do is workload placement, within this rack or another rack, on-prem or in the cloud: mix and match where you do that workload placement for maximum efficiency.
B: Yeah, absolutely. So hopefully we've characterized the hardware interaction and the software interaction. It's a solution that the two companies have worked on together, and we're happy to make it available to the marketplace, so please find us for additional details. Robert, do we have any questions based on our session so far?
A
Thanks
eric
well,
we
have
some
questions
here.
I
guess
we'll
start
with
you
so
eric
do
you
expect
to
see
the
traditional
hardware
vendors,
joining
the
open,
compute
project.
B: Oh, absolutely, Robert. As I alluded to, the Open Compute Project is almost ten years old, and we've seen participation across the board. HP, Dell, and other what you would term traditional vendors have been active participants throughout. They may not participate in all the tracks, and different vendors decide on different things they're willing to share and are interested in, but the benefits of the community have really accrued even to the traditional players.

B: One way you see it very specifically is that there's now a standard for an OCP networking card that is used in servers across the industry, because it was just a no-brainer for everyone to adopt. But absolutely, the innovations have also folded into what would otherwise be considered proprietary products, just as has happened with Linux and containers and so on.
A
Okay,
next
questions
for
andy
andy:
how
do
you
manage
updates
to
the
cloud
native
stack,
specifically
your
flat
car
container
linux
and
your
your
kubernetes
engine.
C
How
do
we
do
this
in
a
secure
way
and
a
way
that
you
know
works
operationally,
because
people
want
to
have
control
over
updates
and
when
they're
applied,
so
we
we
actually
have
a
product,
an
open,
it's
an
open
source
project
which
is
our
update
server
and
that
that
basically
allows
each
of
the
hosts
within
the
rack
to
query
what
is
the
latest
version
of
the
os
and
you
can
apply
policies
on
that
update
server
as
to
how
fast
and
how
automatically
you
want
those
hosts
to
update
and
then
that's
coordinated
between
the
os
and
the
kubernetes
layer.
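The policy-driven rollout Andy describes (each host polls the update server and is told whether it may move to the latest OS version) can be sketched as a simple rate-limited server. The names and API below are invented for illustration; Kinvolk's actual update-server project has its own protocol:

```python
class UpdateServer:
    """Toy update server: hosts poll for the latest OS version, and a
    policy caps how many hosts may be updating at the same time."""

    def __init__(self, latest: str, max_concurrent: int):
        self.latest = latest
        self.max_concurrent = max_concurrent
        self.updating = set()  # hosts currently mid-update

    def check_for_update(self, host: str, current: str):
        if current == self.latest:
            return None  # already up to date
        if len(self.updating) >= self.max_concurrent:
            return None  # policy: wait your turn
        self.updating.add(host)
        return self.latest  # host may update now

    def update_done(self, host: str) -> None:
        self.updating.discard(host)


srv = UpdateServer(latest="3139.1.1", max_concurrent=1)
assert srv.check_for_update("node-a", "3033.2.0") == "3139.1.1"
assert srv.check_for_update("node-b", "3033.2.0") is None  # throttled
srv.update_done("node-a")
assert srv.check_for_update("node-b", "3033.2.0") == "3139.1.1"
```

Throttling concurrent updates is what keeps a rack serving traffic during a rollout: at most one node (in this toy policy) is draining and rebooting at any moment.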
C
So
when
a
new
os
version
goes
out,
which
is
fairly
frequently
because
there
are
security
updates
from
you
know,
time
to
time,
that's
available,
then
on
the
management
node
and
then
each
of
each
of
the
hosts
within
the
rack
based
on
the
policies
you
define
for
it,
we'll
just
pick
it
up.
It
winds
down
the
the
workloads
that
are
running
on
it
so
because
they're
containers,
you
can
do
that.
It's
called
coordinating
and
draining
the
node
in
kubernetes
speak
apply.
C
The
new
os
update
and
reboot
the
kubernetes
layer
updates
and
the
component
layer
updates
are
somewhat
similar.
C: They can be detected; you can go into the UI and it'll say, hey, this version is out of date, there's a new one, do you want to apply it? Or you can do it via the command line as well. So there's a lot of flexibility, and we try to automate as much as possible.
A
Okay,
I
guess
this
next
question
I'll
I'll,
throw
both
eric
to
you
and
to
andy.
What
are
the
key
drivers?
Your
customers
consider
when
choosing
to
deploy
their
own
hardware
infrastructure
versus
going
straight
to
one
of
the
public
cloud
service
providers.
B
Yeah, Robert, when we address this, I think the straightforward thing is this. I've been working in high-scale computing my entire career, and I've watched the evolution of the public clouds over the last 15 years. The real truth of the matter is that what the public cloud has allowed customers to do is separate out the different sets of concerns. So we have customers that do all the possible models you could imagine.
B
From on-premise infrastructure with proprietary software, to public cloud with shared software. We have customers that own their facilities but want to lease the hardware. We have customers that buy their hardware and use managed services for the infrastructure layers, paying a company a monthly fee to manage their software infrastructure; then they pay only their developers, not an infrastructure company. So what cloud has really allowed customers to do is pick, a la carte, which pieces are critical for them and which pieces can be outsourced and converted into a monthly fee; the public cloud converts it all into an all-in monthly fee.
B
The important thing is something Andy said in the main part of the session: I design my applications as a set of services or microservices, against a common set of infrastructure, a common set of APIs, a common set of paradigms, and then I'm able to run that application on top of whichever infrastructure makes the most sense, and I'm able to float that infrastructure on top of whatever hardware makes sense. So we have customers that are 20 percent public cloud.
B
We
have
customers
that
are
80
public
cloud,
but
there
is
whenever
there
is
a
need
for
on-premise
hardware,
then
it
renews
can
step
in
and
of
course,
the
the
software
stacks
like
kubernetes
and
and
flat
car
and
locomotive
are
applicable,
whether
it's
on-prem
or
public
andy.
Do
you
want
to
add
to.
C
Yeah, I think it's a great answer. This idea that cloud is both a pricing model as well as a separation of operational concerns, that's the innovation it brought, and a lot of those things can be applied on-prem as well. That's one of the things we've been trying to do: enable that separation of operational concerns.
C
As we were saying in the main session, we don't want you to worry about spending a lot of time managing the infrastructure. If I can take away a lot of that effort from you, maybe we can make it almost as easy to have hardware on-prem as it is to consume virtual machines.
A
Okay, a couple more questions. Eric, can you comment on the typical time frames for the hardware operational phase you see with your customers?
B
Yeah, absolutely, Robert. The most important thing is that there is no such thing as typical. We absolutely have customers that are focused on improving their infrastructure, growing their infrastructure, squeezing out a last bit of efficiency, most recently, for example, in AI-type workloads, where there really is still a Moore's-law pace of innovation.
B
Right
where
a
piece
of
hardware
that
was
just
announced
last
week
at
at
a
gpu
event
is
a
factor
of
two
and
and
three
more
powerful
than
than
the
system
just
18
months
ago
right.
B
But
there
is
also
kind
of
a
typical
heavy
weight
of
or
heavy
center
centroid
of
computing,
where,
where
moore's
law
isn't
advancing
as
as
efficiently
as
it
was,
and
then
there's
also
a
set
of
of
what
are
often
considered
secondary
use
cases
in
in
storage
and
and
other
aspects
of
of
the
infrastructure
where,
where
the
the
laws
have
have
always
been
different
right.
So
so,
kreider's
law
in
in
storage
was
always
a
very
different
one
than
than
moore's
law.
B
And
so,
as
a
result,
when
once
you
think
about
which
workload
is
being
applied
to
to
which
hardware
and
and
what
are
the
the
life
cycles
of
the
various
hardware,
you
find
that
there
is
no
clean,
clean
distinction.
There's
there
are
some
drivers
at
three
years
because
of
financial
reasons.
B
But
when
you
look
at
you
know,
there
are
customers
that
will
have
storage
systems
in
place
for
six
seven
eight
years,
because
that's
that's
where
the
data
lives
and
and
the
data,
maybe
maybe
isn't
growing,
or
maybe
it
is
growing
and
they're
very
comfortable
with
those
systems
and
apis
and
on
the
other
extreme.
There
are,
you
know:
compute,
maybe
gpu
type
workloads
where
there's
a
turnover
in
in
16
or
18
months,
right
and
then
in
between
maybe
is
is
networking
right.
B
So
there
we
still
have
customers
today,
where
we're
bringing
them
from
the
the
one
gigabit
era
into
25
gigabit
right
and
at
the
other
end
of
the
spectrum.
We
have
customers
that
are
starting
to
deploy,
200,
gigabit
and
400
gigabit
solutions
right.
So
the
most
important
thing,
robert
and
and
for
for
customer
for
folks
on
the
on
the
webinar
to
hear
is
there
is
no
typical
and
what
what
the
systems
that
we've
described.
B
Allow
you
to
optimize
within
whatever
your
time
frames
are.
If,
if
your
time
frame
is
three
years,
if
your
time
frame
is
two
years,
if
your
time
frame
is
seven
or
eight
years,
the
important
thing
is
that
that
we
can
look
at
the
the
optimization
across
workloads
across
on-premise
across
public
cloud,
as
as
andy's
has
alluded
to
and
get
the
most
efficiency
across
the
board
right.
And
it's
all
enabled,
because
we're
collaborating
and
sharing
on
the
the
lingua
franca
of
the
the
apis
and
the
workloads
and
the
deployments.
A
Last question, which I'll throw to both of you: are you seeing a changing mindset with regard to how organizations are addressing sustainability?
C
Sure, yeah, I think it's definitely coming front and center. Interestingly, I moved from the States to Europe two years ago, and the US two years ago was not necessarily at the forefront of concern about climate change. Moving to Europe, you saw a lot more political tailwind pushing for climate change regulation, and I think companies are aware of that, certainly keeping an eye on what regulation is coming. As much as anything, actually, I think what you're seeing is generational change pushing it: companies have people coming into the workforce, and those new employees want to work for companies that are doing good for the world and good for the environment.
C
These
are
important
topics
and
we,
I
see
companies
responding
to
you
know
demands
from
from
their
employees
from
their
communities
to
be
more
proactive
in
these
areas,
and
you
see
this
with.
For
example,
you
know
the
the
net
zero,
the
net,
zero
commitments
by
companies
like
microsoft,
by
amazon
and
also
in
european
countries
which
are
committing
to
have
zero
traditional
fuel
vehicles
by
2030.
C
You
know
these
are
the
kind
of
things
that
are
really
pushing
forward
environmental
awareness.
So,
having
said
that,
I
mean,
I
think
it.
It
is
just
one
of
many
factors.
Still
overall
cost
is,
you
know,
is
still
the
key
driver
for
compute
infrastructure,
but
from
what
I
see,
I
think
you
know
the
we
can
make
this
a
win-win
right.
If
we're
doing
well
by
the
environment,
we
can
also
make
make
solutions
that
cost
less,
and
you
know
those
two.
C
If
we
can
make
them
go
hand
in
hand,
then
it's
it's
good
for
everyone.
B
Right, yeah. Speaking from my perspective, and the first perspective I should bring is that of an engineer, the politics of sustainability is only catching up with the scientific reality. If we are inefficient in our use of resources, then we will use more resources, and that's been a focus of mine for my entire engineering career: we always try to create the most efficient solution to a particular problem.
B
Right
and-
and
so
that's
how
I've
approached
you
know
the
designs
that
we
do
in
ocp,
the
designs
that
we
use
for
power,
consumption
and
so
on,
don't
waste
the
resources
if
we
don't
have
to
right,
get
a
given
amount
of
work
for
for
less
kind
of
inputs,
whether
those
inputs
are
you
know,
electrons
in
energy
or
rare
earth,
metals
or
and
etc
water
in
some
of
the
processes
right.
B
So
so,
as
an
engineer,
I've
always
been
up
focused
on
optimizing
the
processes
and
and
doing
the
same
with
with
less
less
input
materials
right.
So,
in
some
sense,
that's
really
we're
really
just
bringing
all
that
together
at
the
level
of
iraq
or
at
the
level
of
of
a
data
center
right,
and
that
said,
as
as
andy
said,
there
is
certainly
a
drive
with
with
employees
at
companies,
but
also
companies
that
are
just
now
do
starting
to
do
the
accounting.
B
What
is
our
footprint
and
as
soon
as
you
you
look
at
the
numbers,
then
you
realize
hey.
Our
footprint
could
be
smaller
right
where,
where
is
the
waste?
Where
is
the
the
return
on
investment
for
reducing
that
waste?
And
that's
the
most
important
thing
in
in
our
solution
is
that
when
a
customer
buys
into
a
sesame
rack,
it
is
more
efficient
without
compromise.
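The accounting Eric describes can be put in concrete terms: operational carbon is roughly average power draw, times a facility overhead factor (PUE), times hours of operation, times the grid's carbon intensity. A minimal sketch, with every number below a purely illustrative assumption rather than IT Renew or Kinvolk data:

```python
HOURS_PER_YEAR = 24 * 365

def annual_footprint_kg_co2e(avg_power_kw, pue, grid_intensity_kg_per_kwh):
    """Yearly operational carbon for one rack.

    avg_power_kw: average IT power draw of the rack
    pue: Power Usage Effectiveness (>= 1.0; facility overhead multiplier)
    grid_intensity_kg_per_kwh: kg CO2e emitted per kWh on the local grid
    """
    energy_kwh = avg_power_kw * pue * HOURS_PER_YEAR
    return energy_kwh * grid_intensity_kg_per_kwh

# Same workload on a less efficient vs. a more efficient rack (assumed numbers):
baseline = annual_footprint_kg_co2e(avg_power_kw=12.0, pue=1.6,
                                    grid_intensity_kg_per_kwh=0.4)
efficient = annual_footprint_kg_co2e(avg_power_kw=9.0, pue=1.2,
                                     grid_intensity_kg_per_kwh=0.4)
savings_pct = 100 * (baseline - efficient) / baseline
```

Once the footprint is a number, "where is the waste" becomes a question about which factor, power draw, facility overhead, or grid mix, you can actually move.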
B
So
it's
not
oh
you're
paying
more
for
the
system,
but
it's
sustainable
on
the
back
end,
it's
actually
paying
less,
so
our
customers
typically
pay
between
a
30
to
50
percent
less
than
than
using
a
traditional
solution,
and
it
has
the
sustainability
benefit.
So
we
think,
with
with
those
kinds
of
innovations
that
are,
as
andy
said,
a
win-win
or
a
no-brainer.
A
Okay, that's a wrap! Thank you, Eric and Andy, a big thanks to the good folks at CNCF, and thank you all for joining us on this webinar.