From YouTube: Deploying K3s at the Edge for Multiplayer Gaming
Hello all, welcome to this webinar, and thanks for attending. Today I'm going to talk about edge computing, why it is important for multiplayer gaming, and then I'm going to show a solution for deploying multiplayer game servers at the edge based on different open source technologies.
I will also talk about some tools by OpenNebula that allow us to deploy resources at the edge.
So let me introduce myself a bit. I work at OpenNebula, a company based in Madrid that focuses on building open source solutions for edge and cloud computing. I work remotely from Lecce, in the south of Italy, as a Cloud Technical Evangelist, so I engage with the community and speak at events and webinars to showcase open source technologies related to cloud computing, edge computing, Kubernetes, containers and so on.
Okay, so let's move to edge computing. It is not just a buzzword; it is a real new computing paradigm, and it poses some technological challenges, because what we need to do is bring computation and data storage closer to the location where they are needed, in order to improve response times, save bandwidth, and reduce data transfer. So edge computing plays an important role in different sectors, like gaming.
A
So
this
is
what
we
are
going
to
talk
about
today
and
but
also
broadcasting
streaming
internetworld
things:
smart
cities
and
visual
desktop
infrastructure,
so
energy
computing
is
important
from
up
for
applications
that
require
low
or
low
latency.
It
will
require
high
bandwidth,
fast
response,
real-time
analytics-
and
these
are
the
main
benefits
that
we
can
get
by
using
this
new
paradigm.
If we look at multiplayer gaming, edge computing can be a game changer. Today, online multiplayer gaming represents a big percentage of entertainment in general, and some types of games are very popular: first-person shooters like Destiny 2 or Call of Duty, multiplayer online battle arenas like League of Legends and Dota 2, and also the very popular battle royale games such as Fortnite or Apex Legends. All those games are played worldwide by millions of players. So how do they work?
When a game starts, all players connect to dedicated game servers, and during the whole game they send information to the game server, which processes all the actions coming from the players. If a player is jumping, running, or shooting, all this information is sent to the game server, which then runs the full simulation, including the physics, taking into account all the player actions; the server then transmits data about the game state back to the clients.
Each client then has its own accurate version of the game state, to be displayed and rendered by the client's GPU. Okay, so edge computing is important for multiplayer online gaming because, in the case of these fast-paced games, we need to lower latency as much as possible in order to provide satisfying gameplay to the players. We cannot use an approach based on a central data center where all game servers are deployed, because this would increase latency, degrade game response times, and lower the perceived game quality, and players will not tolerate that kind of service. Instead, by using an edge computing paradigm, we can improve the gaming experience. Why? By provisioning game servers as close as possible to the pool of users that participate in the game, we can drastically reduce latency and transmit data faster than with a large centralized data center where all game servers are deployed.
So we have to take all of this into account. For multiplayer online gaming we have to deal with resources that must be created and deleted dynamically, and we also have to take latency into account. In order to set up an edge computing solution for multiplayer games, we are going to use several technologies.
So let's start with Agones. Agones is an open source platform for deploying, scaling and orchestrating game servers for multiplayer games, and it has been built on top of Kubernetes. Agones extends Kubernetes, so you can use standard Kubernetes tooling and APIs, like kubectl, to create, run, manage and scale dedicated game servers.
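As an illustration of that extension (this is not the exact manifest used later in the webinar; the image tag and port are assumptions based on the public Agones Xonotic example), a dedicated game server is declared as a Kubernetes custom resource:

```yaml
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  name: xonotic
spec:
  ports:
  - name: default
    containerPort: 26000          # port the game binary listens on
  template:
    spec:
      containers:
      - name: xonotic
        image: gcr.io/agones-images/xonotic-example:0.8
```

Agones maps the container port to a host port on the node and tracks the game server's lifecycle (Scheduled, Ready, Allocated) in the resource status.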
A
The
second
technology
that
we
are
going
to
look
is
the
k3s,
so
get
reac
is
an
official
now
cncf
sandbox
project
is
a
certified
kubernetes
distribution
and
is
ideal
for
age
deployments
because
is
packaged
as
a
single
binary.
Less
than
40
megabytes
so
is
fast.
Provisioning
comes
with
minimal
to
know
to
know
operating
system
dependencies.
A
It
also
can
run
on
several
process
or
architecture
from
intel
x86
to
a
to
arm
architecture.
So
ktvs
is
two
components
as
the
server
that
is
in
charge
of
managing
the
cluster,
deploying
container
sports
and
the
dk3s
agent.
That
is
the
function
as
a
worker
you
know,
is
so
in
charge
of
running
and
executing
poles.
A
Okay.
So
now,
let's
look
at
the
solution
that
we
are
going
to
show
also
with
the
demo
in
a
few
minutes,
so
we
are
going
to
use
a
couple
of
tools
also
developed
by
the
nebula.
One
is
called
one
provision,
so
one
provision
is
a
tool
that
allows
to
dynamically
grow
a
cloud
infrastructure
so
with
the
physical
resources
that
runs
on
remote
cloud
providers,
and
so
we
can
create
resources,
for
example,
on
equinix
metal,
on
aws
on
other
cloud
providers
and
with
one
provision
we
can
deploy
on
these
cloud
providers
a
fully
functional,
opennebula
cluster.
A
So
with
the
computing
with
the
storage
with
the
networking
resources,
and
this
cluster
will
be
managed
by
using
the
open
nebula
computing
resources
will
be
configured
in
by
using
firecracker.
That
is
an
open
source
solution
by
amazon
web
services
and
that
has
been
integrated
in
open,
nebula
to
create
and
manage
secure
and
multitain
and
container
based
services
and
applications.
A
Then,
with
another
tool
from
up
enabler
that
is
called
one
one
flow.
We
are
going
to
deploy
catrice
clusters
on
resources
provisioned
by
one
provision,
so
ktris
clusters
are
deployed
as
firecracker
micro
vms,
and
we
have
to
take
into
account
that
firecracker
micro
vms
is
a
very
fast
startup
time
and
a
very
low
memory
over
the
red.
A
With
respect
to
traditional
virtual
machine
in
order
to
deploy
k3s
cluster,
we
start
from
a
k3s
docker
image
that
has
been
built
with
a
docker
file,
and
then
we
can
define
a
service
template
that
will
be
instantiated
to
deploy
a
catrice
clusters
on
edge
resources.
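The Dockerfile used in the demo is only shown briefly on screen; a minimal sketch of such an image, assuming the upstream rancher/k3s base image (the tag is an assumption), could look like:

```dockerfile
# Hypothetical sketch: a K3s v1.17 image to be imported into an
# OpenNebula datastore; contextualization is added on import.
FROM rancher/k3s:v1.17.2-k3s1
# K3s runs inside the Firecracker microVM; whether it starts as a
# server or an agent is decided by the start script at boot time.
```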
A
So
whenever,
when
the
cluster
is
deployed,
we
can
use
a
standard,
kubernetes
tool
like
cubectl
and
we
can
deploy
agonists
on
in
the
clusters
and
then
we
can
still
use
qbctl
by
deploying
a
game
server
within
agonists
and,
for
example,
in
the
demo,
we
are
going
to
use
sonotic.
That
is
a
open
source
and
the
free
first
person
shooter,
and
we
are
going
to
use
then
the
exonotic
game
client
to
connect
to
the
game
server.
Okay,
so
now,
let's
see
a
demo,
so
I
will
show
how
this
solution
works
and
for
multiplayer
gaming.
So I'm going to the Equinix Metal console. Here I created a project; if I go here, this is the project ID, so we can copy the project ID and paste it here, and then I also defined an API key for the demo, so I can copy this one and paste it here. Okay, now we can finish; here the provider has been configured. When the provider setup is completed, we can provision resources on that provider, so let's go and provision resources on Equinix Metal (formerly Packet), using Firecracker as the hypervisor technology.
So we select the provider here, in Amsterdam. Here we can put the name, for example ams1, and then we can configure the inputs: the number of hosts that we would like to create, and the number of public IPs (for example, set four here). You can select different sizes for the server resources, and here you can choose, for example, the operating system.
A
So,
let's
finish
this
and
now
the
provision
will
start
one
provision,
use
some
telephone
to
create
resources
and
then
we'll
use
ansible
to
configure
the
house
and
we
will
going
to
create
computing
resources
so
in
this
case
two
servers
storage,
so
data
stores
to
for
the
vm
and
then
the
networks,
the
public
networks
with
the
for
public
ip
that
we
can
use.
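For reference, the same provisioning workflow can also be driven from the command line with the OneProvision CLI (the template file name below is a placeholder):

```shell
# Create a provision from a provider/cluster description file
oneprovision create equinix-ams1.yaml

# List the provisions and the physical hosts they created
oneprovision list
onehost list
```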
Okay, now let's go to OpenNebula Sunstone, the graphical user interface of OpenNebula. In the infrastructure view I will show you that there is a cluster that has been created by OneProvision; I called it packet-ams1. Here you can also see the hosts that are being provisioned, which I can also show you in the project on Equinix Metal.
A
Then,
let's
see
how
we
can
deploy
the
kubernetes
clusters
one
once
we
have
a
cluster
available.
So,
first
of
all,
what
we
have
to
do
is
to
create
an
image,
a
docker
image
and
import
in
the
data
source
in
open
nebula.
Here
I
already
imported
a
catrice
image.
I
will
show
you
how
you
can
create
an
image
starting
from
a
docker
file,
for
example.
A
So
let's
call
this
address
new,
and
here
we
can
define
the
sides
and
then
here
I
have
a
docker
file
that
you
can
use
to
build
an
image
containing
ktris
now,
which
I
will
download
from
the
github
repository,
and
here
I'm
going
to
download
the
version.
1
17,
okay,
so
once
you
click
create.
What
is
is
happening
here
is
that
is
going
openable
is
going
to
build.
This
image
is
going
to
be
contextualized
with
the
conditionalization
package
on
the
per
nebula,
so
we
can
have
ssh.
We
can
have
networking
and
so
on.
Okay, so this image will be extended with the OpenNebula contextualization package. Once we have the image, we can define a couple of templates for the microVMs that will be deployed on the resources with Firecracker. We have defined two templates; one is for the server component, and in the template you can define the memory and the CPU, and we associate the storage, that is, the Docker image, with this template. We can also associate a network; in this case it is automatic selection.
A
That
means
that
when
we
are
going
to
deploy
at
runtime,
the
network
belonging
to
the
the
cluster
will
be
selected
and
an
important
thing
is
the
context
part.
So
here
we
have
defined
a
start
script,
so
the
stats
clip
will
get
some
information
from
the
vm
like,
for
example,
the
public,
ip
and
then
we'll
start
the
sorry,
the
it
will
start
the
kts
server.
Also, when we launch the K3s server, a token is generated; this token is put as a key into the metadata server of OpenNebula, which is called OneGate, because this token will be used by the agents in order to start and connect to the server.
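The actual start script is not reproduced here; a minimal sketch of what the server-role script could do, assuming the OneGate client that ships with the OpenNebula contextualization package (variable names such as K3S_TOKEN are placeholders), would be:

```shell
# Sketch of a server-role start script (placeholder names).
# ETH0_IP is injected by the OpenNebula contextualization.
PUBLIC_IP="$ETH0_IP"

# Start the K3s server, advertising the public address.
k3s server --node-external-ip "$PUBLIC_IP" &

# Wait for the join token, then publish it through OneGate so that
# the agent VMs in the same service can read it.
until [ -f /var/lib/rancher/k3s/server/node-token ]; do sleep 1; done
onegate vm update --data "K3S_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)"
```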
We have also defined a template for the agent. For this one too you can define the memory and CPU; we associate the same image that we defined for the server, and the network is automatic here as well. The difference is in the context: the start script is different. In this case the script is going to use OneGate, the OpenNebula metadata server, to get the IP of the server and the token, and then it will start the agent instead of the server.
Once we have defined these two templates, we can define a template for the service, so for the cluster, where we define two roles, one for the server and one for the agent, and we associate the two templates that we previously defined.
Okay, so in this case, when we instantiate the service, we have a cardinality of one for the server, which means one server will be deployed, and we will deploy two agents. For the agent role I have defined, for example, a minimum of one VM and a maximum of ten VMs, with a cooldown of two seconds between each scaling operation, because by using OneFlow you can scale at runtime; so, for example, if we need more agents, we can scale out to more agents.
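A OneFlow service template along the lines just described might look like the following sketch (the VM template IDs are placeholders):

```json
{
  "name": "k3s",
  "deployment": "straight",
  "roles": [
    { "name": "server", "cardinality": 1, "vm_template": 0 },
    {
      "name": "agent",
      "cardinality": 2,
      "vm_template": 1,
      "min_vms": 1,
      "max_vms": 10,
      "cooldown": 2,
      "parents": ["server"]
    }
  ]
}
```

With "deployment": "straight" and the server role as parent, OneFlow boots the server first, so the join token is already published through OneGate before the agents start.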
A
And
here
we
have
an
attributes
that
we
are
going
to
pass
to
the
template
and
it
will
be
in
the
name
of
the
cluster
that
we
would
like
to
deploy
the
the
k3s
clusters.
Okay.
So
in
this
case,
we
we
are
going
to
deploy
on
pocket
and
the
cluster
packet
ms-1.
A
And
let
me
go
to
create
the
service
okay.
So
now
this
service
is
going
to
create
the
server
first
and
then
the
agent
okay.
Here you see the role is in a pending state while it creates the K3s server. Meanwhile, I will show you here: as you can see, the provision has finished, and here we now have the cluster, which is in a running state; also in the OpenNebula Sunstone graphical interface you can see that the hosts are now on, so they are available, and we can deploy the K3s clusters on them.
Let's look back at the service: here the K3s server is going to boot and run, and we can see that it has a public IP, which we will now use to connect and deploy Agones.
So now what we are going to do is deploy Agones on this cluster: we can create the namespace and then install Agones in the namespace.
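The exact commands are hard to read in the recording; the standard Agones install with plain kubectl looks like this (the release version is a placeholder):

```shell
# Create the namespace Agones runs in, then apply the upstream manifest
kubectl create namespace agones-system
kubectl apply -f https://raw.githubusercontent.com/googleforgames/agones/release-1.3.0/install/yaml/install.yaml

# Wait until the Agones controller pods are up
kubectl get pods --namespace agones-system
```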
This is a fleet, so we can deploy several game servers. We can define, for example, the replicas and also the type of scheduling: we can decide between Distributed and Packed, according to whether the cluster is dynamic or static. In this case I will deploy this fleet, and two game servers will be deployed on the cluster. Okay, so let me apply this first.
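The fleet manifest is only briefly visible on screen; a minimal Agones fleet matching what is described (the container image is an assumption based on the Agones Xonotic example) would be:

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: xonotic
spec:
  replicas: 2            # two game servers, as in the demo
  scheduling: Packed     # or Distributed, e.g. for static clusters
  template:
    spec:
      ports:
      - name: default
        containerPort: 26000   # Xonotic's default server port
      template:
        spec:
          containers:
          - name: xonotic
            image: gcr.io/agones-images/xonotic-example:0.8
```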
Here we have the Xonotic fleet, with two desired and two current, but we don't have any game server ready yet, so we can check the game servers. As you see, we are using kubectl, because Agones extends Kubernetes, so you can use the Kubernetes API. So let's check the game servers; this will take some time, because in this case we don't have the Xonotic image on the Kubernetes cluster yet.
A
Meanwhile,
just
want
to
say
that
here
we
have
deployed
the
clusters
on
on
packets
right
clearly,
if
we
would
like
to
to
deploy
the
clusters
on
another
provider,
for
example,
networks.
So
the
first
thing
is
also
to
provision
the
other
resources,
for
example
on
aws.
So
here,
for
example,
let
me
go
and
also
start
the
provisioning
on
aws,
london,
okay,
so
let's
call
it
so
this
london,
and
here
we
can
define
again
the
number
of
servers,
the
number
of
public
ip.
This is the AMI, the AMI for CentOS, and here we can select the instance type for the provision, in this case a bare metal server from Amazon. So when I click finish, the provisioning starts here as well; I will show you... oops, in this case another cluster has been created, and also another host.
In this case I chose just one host, which is going to be provisioned; I also have here the instances, still not running, and Terraform is going to run to create the instance here in London.
Let me see... okay, so now it's running. Okay, so let's go back and see if the game servers are ready now. Meanwhile the image has been pulled, and now we have two game servers that are ready; now let me show how we can connect with the client.
Let me quit this.
And so we can use kubectl, for example, to also scale game servers. Let's say we need more game servers: what we can do is use kubectl scale fleet, and then, for xonotic, let's say four replicas. Now, if we check the fleet, we see that we have four desired and four current and only two ready, but in a few seconds we will have four ready. This is faster now because we already downloaded the Xonotic image on the servers.
Okay, and then if we want to shut down some game servers, we can set replicas equal to one, and what is going to happen is that the extra game servers are shut down. So that's how it works: we can scale game servers.
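Reconstructed, the scaling steps look like this (the fleet name xonotic is an assumption):

```shell
# Scale the fleet up to four game servers
kubectl scale fleet xonotic --replicas=4

# Check convergence: DESIRED and CURRENT go to 4, READY follows
kubectl get fleet

# Scale back down; surplus game servers are shut down
kubectl scale fleet xonotic --replicas=1
```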
I will also show you how to scale, for example, the K3s cluster itself: if we need more agents, we can do this by using OneFlow.
For example, we can click on the agent role and then click on scale, and let's say, for example, that instead of two we would like to have three agents. What is going to happen now is that a new agent is deployed on the resources, and it will also join the cluster.
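The same operation is available from the OneFlow CLI (the service name is a placeholder):

```shell
# Grow the agent role of the k3s service to three VMs
oneflow scale k3s agent 3

# Inspect the service and its roles
oneflow show k3s
```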
So this is it for the demo, and well, I think that this is all.
I hope that you enjoyed this webinar and had some fun with this deployment. You can also check the website oneedge.io, where you can find information about the edge cloud architecture by OpenNebula. Okay, thanks to all for attending this webinar, bye.