From YouTube: Agones Community Meeting April 2019
A: Thanks, everybody, for joining us this morning; we've got some good stuff on the agenda. I am trying to copy-paste, but for whatever reason it isn't working for me today. A couple of announcements. First, there's a really great event coming up in August: the Linux Foundation is holding the Open Source Summit North America. It'll be in San Diego, because all conferences are in San Diego this year, apparently. One of the exciting things happening there is an Open Source Gaming Day. This will really be the first time that there's a lot of content focused on gaming and open source, and it's the first time the Linux Foundation has done something like this; Mark and I have been having conversations with them around it. So this is really exciting to see, and I highly encourage everybody to attend if you can. If nothing else, if you have an interesting talk that you would like to give, the call for presentations is currently open, and it closes on May 4th. That's next week, which means May 3rd is when everyone will submit their CFPs. If you have never spoken before, this is actually a really great opportunity to present: the Linux Foundation does a really great job of onboarding new speakers and giving you assistance and tools to get started, so I highly recommend checking it out. Don't be put off by the idea of speaking at a Linux Foundation event if you've not spoken before; definitely check that out. At some point I will put the link in the working doc.
A: I don't know why I can't copy-paste today. The other announcement is: hopefully you saw, we applied (well, we did apply, actually, because the deadline was two days ago) to Season of Docs. If you're familiar with Google Summer of Code, which is where open-source organizations apply to have a college student work on their project over the summer, Season of Docs is much the same, except it's for technical writers and it's not limited to just the summer. One of the things that we've done is put our application together, and we have a document where we're tracking ideas for documentation that our users could benefit from. Anyone who is applying to be in Season of Docs as a technical writer will use these project ideas to put together their application, and they'll also use them to determine which projects they want to work on. So we want to have some cool stuff for them to work on and pick from. Definitely look at that doc, and add to it if there is documentation that you have gone through and thought, "oh, we really need this." Should we not get accepted to participate in the program, we will still take all of these ideas and move them to the documentation repo as issues, so we'll still track it all. So it's definitely helpful both for this particular program and for any sort of documentation efforts in the future. Those are my two big announcements. Does anybody have anything else that they would like to share?
C: I put myself next on the agenda. I'm new to this; I think there was one of these earlier this year that I didn't get to attend. I guess I would like to introduce myself, and maybe some of the other folks in the room who are new can also introduce themselves, since I think there are quite a few new people.
C: It might be nice to go around and have the people that have been here for a while also introduce themselves, so that we know who we're talking to on the other side of the call. My name is Robert Bailey. I've been a longtime contributor to upstream Kubernetes, and I've worked on GKE for many years; recently I've switched and started working on Agones and gaming for Google Cloud. Do you guys want to say hi?
E: I'm [inaudible]. I also worked on Kubernetes for three years, and I just recently moved over to cloud gaming.
F: Hi, I'm [inaudible], at Google working on [inaudible], and I've also contributed to Agones.
H: And hi, I'm Jason. [inaudible] I've been at Google for about four years, and I joined this team about a week ago. Well, this week, actually.

A: All right, let's see. Mark or Robert, do you want to talk about it?
C: So there was an issue that Jeremy opened on March 13th, a little bit more than a month ago, about whether we should keep using Cloud Build for our build automation, or whether we should switch to Prow. There was a proposal document linked from that issue; I think it's been out there for quite a while, so hopefully people have had a chance to look at it and leave comments.
A: Not everybody will be familiar with both of these. I know a lot of folks are already familiar with Kubernetes and with contributing there, so you're probably very familiar with Prow; but for anyone who's not, or who's not familiar with Cloud Build, could you quickly highlight the differences, or the reasoning behind making the switch?
C: I'll just pull the doc up real quick; I think that's probably the easiest thing, in case people aren't familiar with the two choices. Hopefully you've at least experienced Cloud Build if you've sent any PRs. Cloud Build is part of Google Cloud Platform: it allows you to take code from an open-source repo and run it through a series of build steps.
C: I think Mark had to jump through quite a few hoops to make the results of those build steps visible to people that don't work at Google and aren't owners of the project in which the Cloud Build is running. The upside is that that work has already been done, and people outside of Google can see the results, which is great. In contrast, Prow is a system that was built by the Kubernetes community to work around a similar problem.
C: Kubernetes used to use something called Jenkins, which was run internally at Google, and nobody outside could actually see any build logs. So Prow sort of started out as a dashboard to externalize Jenkins build logs, and then eventually turned into a full-fledged automation, build, and GitHub workflow system, built for Kubernetes and designed to scale with Kubernetes.
C: I think some of the big advantages of Prow are that if people are familiar with the Kubernetes ecosystem, it's something that they have already run across. It also tries to emulate, as best as we can with GitHub, the internal Google workflow for submitting code. So it has support for things like adding /lgtm comments, and you can set up OWNERS files in your repositories so you can subdivide ownership. For example, we could give Alexander ownership over the C++ SDK.
C: Then he can approve changes to that, but maybe he doesn't want ownership over the Unity SDK if he's not working on that. So you can split up different parts of the repo to have different people in charge of them. It gives you a graduated model for promoting ownership: you don't have to use GitHub's very coarse-grained tools to say that now, all of a sudden, you're a maintainer of the repo and you have ultimate powers over everything, including deleting it.
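The OWNERS mechanism described above could look roughly like this. A hypothetical sketch for illustration only (Prow reads these files per directory; the handle and label here are made up, not real assignments in the Agones repo):

```yaml
# Hypothetical OWNERS file for an sdks/cpp/ directory.
# People under "reviewers" can /lgtm a PR touching this directory;
# people under "approvers" can /approve it, all without needing
# GitHub admin rights on the repository as a whole.
approvers:
  - alexander   # made-up handle for illustration
reviewers:
  - alexander
labels:
  - area/sdk-cpp
```

Nesting OWNERS files deeper in the tree is how ownership gets subdivided without handing out repo-wide permissions.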
C: Hopefully nobody wants to do that by accident. It also allows people to set labels without having edit access to the repo, and to set milestones without having edit access there. There are all sorts of commands that are built in, and it'll automatically merge code once it sees that the right labels are applied to PRs, which has been really useful. So I think the biggest advantages, in my mind, are that it really provides a graduated pathway for people to become maintainers of a project, and it enables build automation.
C: The results of that build automation are easy for everybody to see. It's easy to rerun builds if they flake or fail, and it'll automatically run builds when new commits are pushed to PRs. If you want, you can set it to automatically squash commits when things get merged, so you have a cleaner submit history. Really, it's just designed to help work around some of the things that make GitHub kind of hard to deal with, especially as a project scales.
C: So you can really spot which tests are flaking, see how they flake over time, and jump to specific test results. That functionality is not available within Cloud Build, and getting it was something that we thought would be really useful, since we want to run tests periodically and see how they perform over time. That's another one of the features: you can run tests either on a schedule or on PRs, so you can say, run this test every half hour, or run this test once a day.
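Scheduled jobs like these are configured in Prow as "periodics". A minimal sketch, assuming a hypothetical job name, image, and make target:

```yaml
# Hypothetical Prow periodic job: runs on a timer rather than on PRs.
# The job name, container image, and command are illustrative placeholders.
periodics:
- name: agones-periodic-e2e
  interval: 24h            # "run this test once a day"
  decorate: true
  spec:
    containers:
    - image: example.com/agones-e2e:latest
      command: ["make", "test-e2e"]
```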
C: So if we have large-scale performance tests, like the ones Chris has been working on, we can run those once a day, since they take a lot of resources, and we can run them post-submit, which is really nice. You might be able to tell, based on my background and what I've been talking about, that I'm kind of in favor of Prow here. I understand there are also a lot of benefits to sticking with Cloud Build, because it is...
B: So I would love to not have to do that, and have automated squashing just be a thing, rather than having to bug contributors: "hey, can you squash this down?" Doing that every single time is a pain, so that would also be really lovely. You actually mentioned something just now that also grabbed my attention: I have a topic later on about being able to set approvers on certain sections. I think having more approvers, and being able to distribute that workload, would also be good, because I'm also doing a lot of that work.
B: That's not a dig at the other approvers, by the way. But being able to scope approvals to particular areas of expertise would, I think, be really cool, and would probably increase the maintainability and the longevity of this project, since we can distribute that expertise around. So I think that all sounds really awesome. The thing that was missing for me in this document is what our implementation guide looks like: how do we actually do this?
B: That's probably my only concern. Outside of that, there are probably some cons, I think, in terms of documentation and stuff, but I don't have any major objections. I think it'll actually take some work away from me, which is going to make me very happy on a purely selfish level.
C: Yeah, that's a great point. I think the doc was mostly put together to say, here are our choices and what's good or bad about them, not, here's how we actually do it. If people say yes, we want to do Prow, then, at least in my head, I have an implementation plan sort of ready to produce.
A: At the very least, Robert, I'd love to get it written down, so we can use it, like in this doc. Open Match is considering it as well, among others, for what it's worth. I'm a big fan; I think it makes sense for us to use similar tools to what Kubernetes is using, and from my experience with it, it works great.
A: Agreed. And, as you'll hear from me, there are certain things that I don't want to do, but this is one that I think makes sense. Especially because it has the full Kubernetes community behind it (to Mark's point about implementation, support, and things like that), we've got a large group of people that we can reach out to for help. I've already had some conversations with folks, and I'm sure you have too, Robert.
B: I think the point I was probably also making is that we're starting to hit the velocity of contributions where we're hitting the pain points: if I disappear for three days, I think everything slows to a crawl. So anything we can do to automate that, I'm definitely in favor of.
C: Somebody mentioned that they were on a flight right now, so we should probably ping the issue and say that we're going to set a timeout of, you know, three or four days from now, and at that point the decision is final. That gives people who aren't here, or who maybe don't want to speak up on the call, a chance to object. So rather than finalizing right this second, we should say there have been no objections, and we will finalize in X number of days unless there are objections.
B: This is about the 1.0 plan. I'm very committed to getting 1.0 out this year. [inaudible] I'm sure there are people who want that to be faster, which is fine. At a high level, I sort of broke this down into two levels, the first being functionality: if people are asking what has to be in the 1.0 release, what I've done is essentially a sacrificial draft.
B: If you have a look at the next milestone, it has a list of stuff in it. It actually has some stuff that's in progress, but there are a few bits and pieces in there that are not yet completed: things like documentation on how to upgrade Agones (which is a topic of conversation for today), the Unity plug-in SDK, some statistics collection that we still haven't finished off yet, et cetera. Functionality-wise, I think we're actually fairly close to what I would consider 1.0.
B: But if you disagree with anything, either because you think something shouldn't be in 1.0, or because there's something missing that should be in 1.0, then we can handle it. You can either create a ticket, or make a comment on the ticket saying, hey, this should definitely be in 1.0, and then we can add it to the next milestone and rotate that forward. Or, for the opposite decision, same thing: put a comment on a ticket saying this definitely shouldn't be here.
B: Then there's hardening and scalability; I know a lot of that work is happening. Thank you, [inaudible]: I think you've been doing a lot of that work, and it's making huge progress, which is awesome. I think, to Robert's point, and I'll expand on that, we can do a lot of hardening work, basically decide on what our scale target is, and then increase it from there. Did you want to talk about that? You've probably got more experience in that area.
C: I think that the sooner we get to 1.0, the better, in terms of adoption, because people will start using it, and then we'll start to make sure we're building the right thing for people, and we can grow the scale over time. I think that's something that Kubernetes did really successfully: the first release of Kubernetes supported a laughably small number of nodes, and the other projects at the time were all pointing fingers, saying this...
C: ...isn't a real thing. But the people that worked on the project understood how to make it scale; it just wasn't a priority at first, because there were no users that needed the scale. I'm wondering if we're in a similar situation: if nobody needs the scale yet, maybe we should get everything else buttoned down, so that we can say it works at a small scale, and then, as people start to need it at a bigger scale, we can make it work at a bigger scale.
C: I think that would be great, unless somebody says, no, I really need this right now. If somebody is looking for a specific scale target that we don't think works yet, then maybe we should think about postponing 1.0 until we can hit it. Otherwise, based on the testing that [inaudible] has done, we might have some ideas about where we are very comfortable supporting it.
G: I've been running some long-running tests to basically figure out the overall performance of the Agones components together, not just for a couple of hours, but over days of runs. So I wrote a simple load test that has about 120 to 130 concurrent clients basically allocating game servers. I also updated the simple UDP game server so that after three minutes it calls shutdown, and then it goes away. So I wanted to see this cycle of short-lived game servers over a certain period.
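The shape of that load test could be sketched roughly as below. This is a simplified simulation for illustration only, not the actual harness: the real test allocates Agones GameServers through the Kubernetes API, while this sketch just models ~120 concurrent clients draining a pool of ready servers and reports throughput.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

READY = list(range(10_000))   # stand-in for a pool of "ready" game servers
lock = threading.Lock()
allocated = []

def allocate_one() -> bool:
    """Model a single allocation request: claim a ready server, if any."""
    with lock:
        if not READY:
            return False
        allocated.append(READY.pop())
        return True

def client_loop() -> int:
    """One concurrent client allocating until the pool is exhausted."""
    count = 0
    while allocate_one():
        count += 1
    return count

start = time.time()
with ThreadPoolExecutor(max_workers=120) as pool:  # ~120 concurrent clients
    totals = list(pool.map(lambda _: client_loop(), range(120)))
elapsed = time.time() - start

print(f"allocated {len(allocated)} servers in {elapsed:.2f}s")
```

Each server is claimed exactly once, mirroring the requirement that an allocation never hands the same GameServer to two clients.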
G: As you can see, the unhealthy servers kept increasing and weren't going away, and at some point they basically stuck around and took over all the pods. With the help of [inaudible], I figured out the actual problem, and I just submitted a fix. [inaudible] The dashboard would give a better view if I had started this run earlier (I was testing somewhere else before), but currently the unhealthy servers are now being cleaned up automatically.
G: So there's a PR out that hopefully should be merged soon. Also, with the recent changes from Mark, we separated game server allocation so that it no longer creates a GameServerAllocation CRD object. That was basically bringing down the system, because it was creating them and then deleting them, and it was really eating up resources. So now we are in better shape, as I show here: this is allocation. For about nine to ten thousand servers, we can allocate all of them in about two minutes or less.
G: So we can support around ten thousand game servers and then quickly allocate them; I think that's okay for 1.0, I guess. We're basically constantly hitting that number, and we can go up to 125 to 130 allocations per second at the peak. So I believe that will be fairly acceptable performance for now, and we can do some further improvements. I was actually talking to [inaudible]; she is looking at some allocation improvements.
G: Those improvements definitely have to be taken care of at some point, but they're not blocking: we were doing about seven or eight allocations per second, and now we can do about twenty per second, and that's one cluster, exactly. I was also talking to Robert about how many nodes we should have in an Agones cluster. I think that's something we should think about; we should not really have too many game servers in one cluster.
G: A single cluster is a single point of failure, so people should have multiple clusters of game servers, which they should distribute. But with that, I think this should be very acceptable in terms of performance, I guess, for 1.0. The one thing we were discussing with [inaudible] is that allocation may, at some point, be separated from the controller: you should have a separate controller, because allocations tend to be very heavy and also time-critical, and they interfere with the controller.
G: So when you want to create more game servers while allocations are taking all those cycles, they're kind of racing each other. Also, they are running in one pod, so we should at some point consider having multiple pods, so that we can do upgrades; the upgrade story probably requires moving to multiple pods, so we can take each of them down, upgrade it, and move on to the next one. Those are things we should probably focus on after 1.0, but...
C: I think on the issue, Mark, you'd sort of put "performance and scale", and maybe we should change that more to "reliability". As [inaudible] was pointing out, we were hitting performance targets, and now we want to make sure that we're reliable; I think that's more important now than trying to speed things up.
G: Thank you. So then, once we are reliable, we can also go back and maybe make it faster. There's also the other story, where we are trying to distribute across multiple clusters; so yes, I think that's probably the right way to go. We shouldn't rely on one cluster; I think we are hitting the limits of a single cluster, give or take. So we should not really spend too much time there. I mean, it's a priority, but yeah.
G: That's a good question. Right now I basically max out all the nodes: I have 120 nodes, and they are all ready to go, so I don't do the autoscaling on the Kubernetes side, because that basically affects the numbers. If I have only 20 nodes and I need to allocate more, it will take a while for new nodes to come up. That's another test we have to do, but in this one I pre-allocate all the nodes.
K: It sounds like, at least from my point of view, in terms of allocation and how many game servers you can have, this performance should be enough. But what I would like to see now is how that works with the Kubernetes autoscaling, because this is definitely how people will use it: they want to make sure they don't spend too much in the cloud.
G: Yeah, absolutely. I think it does scale, but I wanted to test one thing, one parameter, at a time: if the cluster is also scaling while we are trying to increase allocations, the two interfere. That's why I was trying to reduce it to one parameter, or minimize it, but we should definitely look at that one too.
B: A timeframe, yeah, that would be really good. I'm also curious, because [inaudible] and I talked about changes around how packed the nodes are during allocations, and how that affects things. So I'd love to be able to tweak that, rerun this, and see the effect.
G: Exactly: does a change really break it, or make it better? It's not really a unit test that just comes back and says it's working, so it took a while to write it and streamline a few things. This is over the last couple of months, I guess, with multiple changes to get here, and I think we are in a better place now. It's a good point that all of this doesn't include the Kubernetes autoscaling in the mix, but we should definitely look into it.
G: The game servers in this test are fairly small, just the simple one that we have as an example. As I said, once we're stable I want to try changing that and see, if you have a sizable game image, how long it will really take.
K: The second point I wanted to talk about is reliability. Last time I checked, the controller is meant to run as a single replica. Is that something that you guys think is okay? I've always been told that it's a better idea to run a set of replicas than a single one. So, do you think we should think about this?
D: Absolutely, I think we should. And we should couple this with [inaudible]'s question about whether we should separate allocations from the rest, since allocations are totally on the critical path. If we fail to immediately scale a fleet, nothing that bad is going to happen; but if you fail to allocate servers out of a fleet, that's going to be pretty bad. So yeah, I'm in favor of moving first to replicated deployments, and of having a separate deployment for the allocator.
G: Note that if you do that, we can have multiple replicas for allocation, which we can parallelize, while the controller can still be a single one. If the controller has a 10-second delay before it creates a fleet, it won't be the end of the world: the Deployment will just kick off another pod and it will start again, and that should be okay. But it won't be okay, as Eric says, if allocation is not available, right?
C: A lot of the Kubernetes controllers are pretty dumb, and assume that they effectively own the entire resource. Running multiple of them effectively means you run three pods and they take a global lock, so you're not really parallelizing the effort; all you're doing is providing hot failover, instead of having to restart a new pod. Whereas with the allocation stuff, I think it'll be easier to actually parallelize the effort. I think we can start there.
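That global-lock pattern can be shown in miniature. A hypothetical sketch, not Agones code: three simulated replicas contend for a single leader lock, only the holder does work, and the standbys are warm failovers that take over when the leader dies.

```python
import threading
import time

leader_lock = threading.Lock()  # the "global lock" only one replica holds
work_log = []                   # records which replica did each unit of work

def replica(name: str, stop: threading.Event) -> None:
    """A simulated controller replica: block until elected, then work."""
    with leader_lock:                # standbys block here, doing nothing
        while not stop.is_set():
            work_log.append(name)    # "reconcile" while we are the leader
            time.sleep(0.01)
    # releasing the lock lets a standby take over immediately

stops = {name: threading.Event() for name in ("rep-0", "rep-1", "rep-2")}
threads = [threading.Thread(target=replica, args=(n, s)) for n, s in stops.items()]
for t in threads:
    t.start()

time.sleep(0.1)
first_leader = work_log[-1]   # whichever replica won the lock
stops[first_leader].set()     # simulate the leader dying
time.sleep(0.2)               # a standby picks up the work
for s in stops.values():
    s.set()
for t in threads:
    t.join()

print("replicas that actually did work:", sorted(set(work_log)))
```

Only two of the three replicas ever do work here; the third just waits, which is the point made above: replicas of a lock-based controller buy hot failover, not parallelism.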
D: I have one more comment about 1.0 planning. To me, the biggest thing is figuring out the compatibility promise and the compatibility story; I'm wondering what is important for people. Once we go to 1.0, what kind of backwards compatibility are people expecting? Is semantic versioning basically what we should be using, with a promise that there will not be a breaking change within the 1.x branch?
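That semantic-versioning promise (no breaking changes within the 1.x branch) can be stated mechanically. A minimal sketch of the rule, assuming plain MAJOR.MINOR.PATCH version strings:

```python
def parse(version: str) -> tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH string into a comparable tuple."""
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def may_break(old: str, new: str) -> bool:
    """Under semver, only a MAJOR bump may contain breaking changes.
    Before 1.0 (major == 0), any MINOR bump may also break."""
    o, n = parse(old), parse(new)
    if o[0] != n[0]:
        return True              # e.g. 1.x -> 2.0: breaking changes allowed
    if o[0] == 0 and o[1] != n[1]:
        return True              # e.g. 0.5 -> 0.8: pre-1.0, minor may break
    return False                 # within 1.x: compatibility is promised

print(may_break("1.2.0", "1.9.3"))  # within 1.x, so no breaking changes
print(may_break("0.5.0", "0.8.0"))  # pre-1.0 minor bump, so it may break
```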
B: Yeah, and this is where I was also putting a comment in, like I added when we were talking about changing some of the API groups for the CRDs. Basically, any changes we want to make to the CRDs, we need to make now, before we go 1.0. That's super important, especially as the CRD webhook conversion stuff is still not fully baked in Kubernetes, which is unfortunate.
M: Yeah, sure; I don't know if my comments came through on audio or not. It's just a problem, I guess, because we're using Agones in production: I think we're on 0.8 right now. We did have some problems upgrading Agones, from, I think, 0.5 to 0.8, and there was some discussion then about rebuilding your cluster, but that's a bit problematic in production. I mean, we've worked around it, but I think others have as well.
K: Well, I've seen productions where it took them 48 hours to roll out a new image. The reason why is that in some game servers you can replay at the end of the game, so there are some people out there who may play for hours straight because they're enjoying the game, and you cannot kick them out.
K: This actually happens on GameLift. When we used to roll out big changes for a game server (we don't have any control over GameLift itself, just the game server), it would take 48 hours, because the way it works is that you slowly drain all the game servers: you add a second fleet, which you start pointing your new allocations to, and the draining time is very slow, I think.
C: It's useful to talk about what that points out, which is that there are at least three different things we should think about upgrading. One is the Kubernetes cluster version itself; two is the Agones control plane and resource definitions; and three is the game server itself. I think what you're describing is the game developer upgrading the game server binaries while holding the Agones version static. Definitely, that could take over 48 hours. I think we'd probably like the Agones control plane and resource definition upgrade to happen faster than 48 hours, and without impacting the game servers as much as possible.
G: That lets your matchmakers say, hey, give me a server from this new version, and allocation will be able to serve it.
B: That raises another interesting question ([inaudible], if one of you has no audio): do you want to have some ability to basically smoke test the new version in production? I'm just thinking: you do an upgrade, something's bad, it's broken, and you need to be able to roll back. I'm assuming you might want that kind of functionality, so how do you do it?
C: This comes back to having multiple clusters, right? One way to do this is you have two clusters, and you upgrade one to the new version of Agones, let that soak for however long you're comfortable, and then upgrade the other one. And if you do have an outage, you still have capacity in the other cluster, and you can potentially use load balancers to shift traffic over.
B: The funny thing is, that was my initial comment: running two clusters, and basically being able to do a red/green deploy between each version, I think is super nice. Then, if something does go wrong, you don't have to do a full rollback; you just flip a switch in your matchmaker to send traffic to the other one.
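At its core, that red/green flip is a pointer swap with a built-in rollback path. A toy sketch with hypothetical cluster names (a real matchmaker would route through allocation endpoints or load balancers, as discussed above):

```python
class MatchmakerRouter:
    """Toy model of a matchmaker sending allocations to one of two
    Agones clusters, with a flip for upgrade cutover or rollback."""

    def __init__(self, blue: str, green: str) -> None:
        self.clusters = {"blue": blue, "green": green}
        self.active = "blue"           # all allocations go here

    def allocate_target(self) -> str:
        """The cluster currently receiving allocation requests."""
        return self.clusters[self.active]

    def flip(self) -> str:
        """Switch traffic to the other cluster and return the new target."""
        self.active = "green" if self.active == "blue" else "blue"
        return self.allocate_target()

router = MatchmakerRouter(blue="cluster-a (agones 0.8)",
                          green="cluster-b (agones 0.9)")
print(router.allocate_target())   # cluster-a serves traffic during the soak
print(router.flip())              # cutover: cluster-b takes new allocations
print(router.flip())              # something broke: roll back to cluster-a
```

Existing game sessions keep running in whichever cluster hosts them; only new allocations follow the flip.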
B: So Helm is probably your best bet there. What would also be really useful (I don't want you doing this in production, because I don't want it to fail in production) is, when you do your testing, to see whether there are things that fail during upgrades: whether Helm fails between one version and another, and what those failures are. That might also be useful information for us going forward.
D: It sounds like multiple hops will never work, right? So what's our upgradeability promise here? Do we want to only promise that you will be able to upgrade from the previous version, and we'll basically compensate for any differences and incompatibilities? Because I think supporting upgrades from a really old version to the current version is really hard.
B: I think Kubernetes has actually done a reasonable job of this, especially of late. I can still use, I think, a beta Deployment, and it'll just convert it over to the new Deployment type, for example; the API signatures still match, and it just brings you up to date automatically, which I think is super nice. But...
C: With Kubernetes, you cannot upgrade from 1.14 straight to 1.18, right? You have to go to 1.15, and then 1.16, and then 1.17, and then 1.18, because each of those upgrades may trigger different conversion processes behind the scenes. If you skip them, you're going to miss those conversions, and your cluster is going to break.
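That one-minor-version-at-a-time rule is easy to make explicit. A small sketch; the version numbers are just the ones from the discussion:

```python
def upgrade_path(current: str, target: str) -> list[str]:
    """Kubernetes-style upgrades must step through every minor version,
    since each step may run its own conversion processes."""
    major, cur_minor = (int(p) for p in current.split("."))
    tgt_major, tgt_minor = (int(p) for p in target.split("."))
    if tgt_major != major:
        raise ValueError("major-version jumps need their own plan")
    if tgt_minor < cur_minor:
        raise ValueError("downgrades are not supported")
    return [f"{major}.{m}" for m in range(cur_minor + 1, tgt_minor + 1)]

print(upgrade_path("1.14", "1.18"))  # ['1.15', '1.16', '1.17', '1.18']
```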
C: Well, yeah, we may want to defer that till next time; I just opened the issue last night. We have seven minutes left, and I think April's trying to keep an eye on the clock, but I did want to very briefly mention that there is a pattern in Kubernetes of creating software that knows how to upgrade other software and deal with the operation of other software, and Helm, I think, is sort of one way to do this.
C: We can defer to Helm to be that lifecycle manager, or we can write custom software if we think that Agones is complicated enough that it needs a little more hand-holding than we can get from Helm. I put some links in there; please go ahead and look at the issue that I linked, and maybe we can talk about that next time.
A: Yeah, so just real quick: we're over by five minutes or so. Mark's got a link in the working doc; we do need more approvers, so everybody check it out. We would love to have you. Ping myself or Mark if you have questions about the specifics of how to get there, but we would love to have more approvers. And then I want to give Alexander some time to talk about deployment with Terraform.
K: I figured, if it works, I think it's a good idea. One big reason is that GCP is not in China, and that's a big market for games, so not being able to be close to that market is an issue, especially for a big production that has been running for a while and now wants to attack this market. So I will definitely look into it, at least to see if it's working; and if we already have documentation explaining how it works, that's good, if you want to do it.
A: All right, so we are perfectly on time; how beautifully did that work out? We did have a couple of things that we didn't get to this time, so we'll add them to the issues for next week. I mean, not next week, sorry: next month! In the meantime, if you have anything you want to discuss, please do file an issue. We want to keep everything on GitHub, and we can have the conversation there, along with any other additional help or anything that you need.