From YouTube: Agones Community Meeting 5.23.19
A
At the last meeting we discussed this, and I said we'd give it a week or so for anyone to raise objections on the issue. No objections have come up, but I have not yet had a chance to actually write up the plan for how we're going to make the switch, so that should be coming soon. When this meeting popped up on my calendar I was like, oh my gosh, it's already been a month; I should have made more progress than that in a month, although other things have been distracting.
C
I'll put a link in the chat, and then in the working docs as well. You can submit your address, and everybody gets a t-shirt as a thank-you for being here and being part of the community.
A
No, that's fine. T-shirts are important, like super important; that's how you know you have a real product. All right, so: Kubernetes version compatibility. I know there's been some backwards and forwards on this, and I read a message about it a little bit ago. I feel like there are kind of two layers to this, one of which is: which version of client-go do we use?
A
Basically, if we start developing at some point against 1.12 and people are using 1.11, then we don't actually have much confidence; we don't really know whether stability on 1.11 is actually good or not. Do we need multiple test clusters with different versions? I don't know how to solve this problem. What do people think?
A
In theory it should be fine, but it becomes the same problem, right? We haven't tested it. If we haven't done end-to-end tests, and we have no performance tests, there's no way to know for absolute sure. So again, same problem, although the API surfaces that we're touching should be fine. Robert, you just came off mute, so I assume you have thoughts and feelings.
B
Yeah, I was going to try to let other people jump in first. I think, to answer your question, Steve, the answer is: it depends on how much testing resource we want to put in on our side. If we want to say we will qualify a single version of Kubernetes, and other versions may work but use them at your own risk, then that's sort of the least amount of resources we could put in.
B
If we want to put in more, we could qualify multiple versions of Kubernetes, and it's really just a question of how many testing resources we have, and how much time people have to triage failures and so on, as a product and as a community. I think the straw man of "let's pick one and start with that" seems really good. Like Mark said, it should work forwards and backwards at least one version without any trouble, because that's the general guarantee from Kubernetes in terms of the client libraries.
B
Version skew sort of sneaks in, though, and sometimes there's a security problem where things are broken on purpose. So the guaranteed Kubernetes version Mark was talking about would mean: we have tested this one and we know it works; other ones are likely to work, but they're a little bit more use-at-your-own-risk. And if we think that's not sufficient, we can guarantee multiple versions.
A
I think, to that point as well, and I brought this up in the ticket: if we're going to pick one, maybe we pick n minus one of whatever is supported across all the cloud providers, so we're not on the bleeding edge, but we're still rolling forward.
B
The one thing I'd change is that over time that's likely to be less useful, because the cloud providers are getting slower at adopting the bleeding edge of Kubernetes. So if we continue that pattern, I think at some point we're likely to be a couple of releases behind the newest release of Kubernetes, maybe two or three, which starts to get a little dicey in terms of upstream Kubernetes support.
B
And that kind of sucks. So I think if we can identify when we think something is stable, that's basically what your heuristic is: when the cloud providers consider it stable and most people will be comfortable using it. I think your heuristic for now is probably fine, but over time we might need to adjust it.
A
That seems reasonable. I always feel like, at the very least, it needs to be adopted across the three major cloud providers. I think 1.13 isn't quite there yet, from memory, the last time I looked, which is why I was thinking maybe 1.12. It seems like a bad time to do our shift right now, but I hear your point; it definitely has been getting slower.
B
So 1.14 is out and not on the cloud providers; 1.13 has been out for a while, but it's not on all the cloud providers. So 1.12 is super safe, but the head of Kubernetes isn't even 1.14 any more; they're building 1.15 next, right? So we're already starting to see a three-version skew, and that's about how far back upstream is usually willing to backport security fixes and so forth. Back in my day, we got a new Kubernetes...
A
Right, I'm going to put a note in here. Does that sound reasonable to everyone? It seems like the general consensus is to target 1.12, or n minus 1. I'll make a comment on the tickets and confirm consensus there; that counts as folk consensus, apparently. Any major objections from anyone? I believe there are also some really nice scheduling improvements in 1.12 that should help with our performance too, so I'll be curious to see what that does.
F
Yes, so we were playing with Agones at a slightly more advanced level recently, and we discovered that the default installation only really works if you're trying to create fleets in the default namespace, and, you know, that could be how most people are using Agones these days. So that's basically my question. The problem, if you're trying to use a non-default namespace, is that there is no service account in that namespace.
F
It seems to me that it would make more sense if we just supported any namespace without having to pre-declare it. Unfortunately, that would require changing how the SDK sidecar works, to get rid of the dependency on those service accounts, or finding other ways around it. But I was sort of curious whether other people see the need to support more than one namespace per cluster.
E
For our case, we have three different game modes that we run in a cluster, but we're just using the default namespace for everything. (Okay. And are you using that Kubernetes cluster for anything else, or do you just put everything in the one cluster?) We run our own API server in there as well, a matchmaker, everything.
A
That's an interesting question; we could have this conversation every day. Let me see if I can think this through: would you want to be able to dynamically add game servers to basically any arbitrary namespace, if that makes sense, rather than having to go back and create a service account each time? Wow, this makes my head hurt.
A
Basically, right now you have to define your namespaces at install time. Would you be happier if the admin does the installation, and then, when the user comes to use it, they're able to just pick any namespace they want to put things in (or you control that somehow), and you don't have to care about creating service accounts or anything like that? Does that make sense?
B
The documentation says you can run a helm upgrade to add additional namespaces after the fact, so you don't necessarily have to pre-install them, and presumably that's just going through and creating new service accounts. So if you have the ability to create service accounts later, you should be able to use new namespaces.
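As a sketch of what that upgrade looks like: the `gameservers.namespaces` value comes from the Agones Helm chart's install documentation, while the release name and the namespace names here are placeholders.

```shell
# Re-run the release with the full namespace list; the chart then
# creates the service account and RBAC bindings in each listed
# namespace. "my-release" and the game-* namespaces are placeholders.
helm upgrade my-release agones/agones \
  --set "gameservers.namespaces={default,game-a,game-b}"
```

Note that `--set` replaces the previous value rather than appending to it, so the list should include every namespace you still want, not just the new one.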
A
Cool
alright
me
again:
Wow
cool,
actually
rub
it
out
of
this,
so
maybe
it's
Roby's,
so
always
the
words
so
I
sat
down
and
started
poking
at
the
allocation
stuff,
which
had
recent
improvements
for
throughput
and
Hilary's.
Not
here,
unfortunately,
made
some
really
great
strides
in
this,
but
what
I
realized
is
the
packet
algorithm
so
trying
to
keep
everything
tight
for
on
the
cloud
environment
is
not
as
good
as
it
used
to
be,
not
by
a
large
margin.
A
So
we
end
up
with
the
bit
of
the
Swiss
cheese
effect
happening.
The
latest
version,
but
we
get
much
higher
throughput
through
so
like
pros
and
cons.
I
had
some
ideas
on
potentially
being
able
to
basically
precache
a
sort.
It's
a
sorted
list
of
game
servers
that
people
could
be
able
to
use
because
I
figure
that
allocation.
A
I think that's reasonable. If that's different for some people, then we could probably handle it as a separate use case; if we see that there are only, say, five game servers left, then we handle that specially, I don't know. So what I've been doing is: as each change comes in, I maintain a sorted list of game servers, preferred first.
A
And then, when you go to allocate, you can just pop one off the list atomically, and then you'll never get the contention between multiple things trying to grab the same one. Then, probably, what I would end up doing is, maybe every 30 seconds (an arbitrary number), go back and resync it, so it kind of becomes eventually consistent in terms of the ordering, and I think that's probably good enough.
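A minimal sketch of that idea, assuming plain string server names and a simple score function (both hypothetical stand-ins for the real Agones types): the list is rebuilt periodically from the watch cache, and allocation pops from it under a lock so no two allocations receive the same game server.

```go
package main

import (
	"sort"
	"sync"
)

// allocationCache holds a pre-sorted list of ready game server names.
type allocationCache struct {
	mu      sync.Mutex
	servers []string
}

// resync replaces the cached list with a freshly sorted snapshot. In
// the real system this would run every so often (e.g. every 30s)
// against the informer cache, making the ordering eventually consistent.
func (c *allocationCache) resync(ready []string, score func(string) int) {
	sorted := append([]string(nil), ready...)
	sort.Slice(sorted, func(i, j int) bool {
		return score(sorted[i]) > score(sorted[j]) // most-packed first
	})
	c.mu.Lock()
	c.servers = sorted
	c.mu.Unlock()
}

// pop atomically removes and returns the current best candidate, so
// concurrent allocations never contend for the same game server.
func (c *allocationCache) pop() (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.servers) == 0 {
		return "", false
	}
	gs := c.servers[0]
	c.servers = c.servers[1:]
	return gs, true
}
```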
A
The two concerns I have with that, which I'll write up in the ticket too: one is if you had overlapping game server allocations, like they both apply to the same fleets or something like that, so you have two different caches; the other is when, by the time we do allocations, we want to split the work between multiple pods. What I think we could actually do...
A
...there is add a little bit of jitter to the sorting, so the lists will be in slightly different orders in each of the caches, and then they'll eventually, consistently, come back towards each other, and so you shouldn't get as much contention. We can even just play with how much variability to add to the sorting. That's my theory, but I haven't tried it, so that's a second-round type idea.
A
The concern I had was actually more about concurrent requests of the same allocation type asking for exactly the same game server each time. That was my concern: if we sorted every time, which is cool and that's fine, there's nothing to say that you're not always going to get the same one, or one within the set on the same node, and so you could potentially be colliding a lot and having a lot of retries, especially at high throughput, as the allocations retry.
F
We were talking about using batching, basically, which pretty much completely eliminates contention. You basically take all the requests that have come to the server at the same time; they all sit together and say: okay, we all want those 15 servers, you take this one, you take this one, you take this one. So that'll be much more efficient, and it will be perfectly packed, I think.
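A toy version of that batching step, assuming requests and game servers are identified by plain strings (hypothetical stand-ins for the real request and GameServer types): requests that arrived in the same window divide a sorted candidate list between them instead of racing for the top entry.

```go
package main

// assignBatch pairs a batch of pending allocation requests with
// distinct game servers from an already-sorted candidate list.
// Requests beyond the number of ready servers simply miss this batch
// and would retry in the next one.
func assignBatch(requests, candidates []string) map[string]string {
	assigned := make(map[string]string, len(requests))
	for i, req := range requests {
		if i >= len(candidates) {
			break // more requests than ready servers
		}
		assigned[req] = candidates[i]
	}
	return assigned
}
```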
A
Wow. I did a whole bunch of work for the other thing, but I like this idea a whole lot better, actually. Okay, I'm going to throw away my work. It was fun, though. That's why we have these conversations. No, no, that's good; it's better to have the simpler solution than what I had before. Maybe I can reuse it, that's fun too. Awesome, I really like that idea. What was I going to say now... yeah.
F
You will, but then you have to do some kind of sharding: basically say that pod number one only prefers, you know, things with some random identifier modulo n equal to one, if that makes sense. This requires some tuning, but for large populations it just works beautifully; for smaller populations it can get tricky.
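A sketch of that sharding rule, under the assumption that each game server carries (or can be hashed to) a random identifier; the FNV hash of the server name here is just one convenient way to derive one.

```go
package main

import "hash/fnv"

// shardID derives a stable pseudo-random identifier from a game
// server's name.
func shardID(name string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(name))
	return h.Sum32()
}

// prefers reports whether allocator pod podIndex (0..podCount-1)
// should prefer this game server. Pods mostly pull from disjoint
// slices of the candidate list, so they rarely race for the same one.
func prefers(serverName string, podIndex, podCount uint32) bool {
	return shardID(serverName)%podCount == podIndex
}
```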
G
So we recently added an allocator service that, basically, right now is acting as a reverse proxy, and we will introduce a gRPC API. So instead of the API server extension doing the allocation for us, this allocator service will do that. We think it's going to improve performance, because it can scale independently of the API server and the controllers.
G
Also, it helps with the cases where we have a matchmaker outside Kubernetes that needs to make calls into Kubernetes. Authentication happens with mutual TLS, with client certificates, and we have an allow list of certificates that the service accepts. There are multiple steps for this task to be complete; right now we have the reverse proxy in place.
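To illustrate the mutual-TLS side of that, here is a minimal sketch of building the client-side TLS config a matchmaker might use when dialing such a service. The function and its in-memory PEM inputs are illustrative, not part of the Agones API.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
)

// mtlsClientConfig builds a TLS config that presents a client
// certificate (so the server can check it against its allow list) and
// pins the CA used to verify the server's own serving certificate.
func mtlsClientConfig(certPEM, keyPEM, caPEM []byte) (*tls.Config, error) {
	cert, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("could not parse CA certificate")
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert}, // sent for mutual auth
		RootCAs:      pool,                    // trust only the pinned CA
	}, nil
}
```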
F
So
I
think
we
should
probably
talk
why
we're
doing
this
right
so
that
so
there's
two
use
cases.
One
is
so
that
the
matchmaker
completely
outside
of
the
cluster
can
talk
to
it
and
it
would
not
require
any
kubernetes
credentials,
which
you
know.
Industrial
easy
to
get
attack
outside
is
hard,
but
also
for
them
for
them
for
them
hopping
between
the
allocation
services.
For
oh.
A
If I remember correctly, as well, I think we talked about this: in upcoming Kubernetes versions, service account credentials are going to get cycled a lot faster. (Yeah, that's a primary reason why we are doing this.) So it kind of becomes untenable to maintain a service account credential from outside the cluster, at least in upcoming versions, because, I mean: security, good.
B
I'm looking through the issue backlog, and there's an interesting feature request from someone, about a month ago, about allowing game servers to run without ports, because they want to use a bunch of proxies in front of their game servers. I don't know if that's a use case other people have run across, but I'm curious to learn more if anybody has tried doing that, or is interested in doing that. It's issue 749. I don't think the person who wrote it is here.
B
One of the reasons we use the node IP directly is to reduce latency; that's done explicitly for higher performance. So I'm kind of curious what the use cases are for explicitly saying we don't care about the higher performance, like whether there's a different class of games that this is supporting, or how that would work.
B
Did that help a bit? Yeah, I was just curious if anybody else on the call was interested in the use case as well, whether it's something that we should think more about and discuss, or whether we should poke this person and see if they can show up next month, or put more info into the ticket. Because it's a use case I hadn't really thought about as much; we were mostly focused on reducing latency as much as possible.
A
Interesting: they're specifically using WebSockets, which is a whole other fun thing. In the conversations I've seen with pretty much anyone using WebSockets for games, and we've had a few of them in chat as well, it's always a really fun one, because you can't point a WebSocket at an IP and port; you need HTTPS, so you need to have some sort of DNS entry. And then how do you get your DNS entry to propagate to the right people in time? I think the best solution we've seen in chat is running...
A
...a custom DNS server, very much like those DNS tunnelling systems you may have seen, where you put the IP at the front of the hostname, with, you know, the dots replaced with dashes or something, and then have that server resolve it back to the right address so you can WebSocket out to it. That seemed to work reasonably well from what I've seen. But if they want to route that through some sort of other thing, either DDoS protection or a gateway type thing, that's probably not unreasonable, given that it's just WebSockets.
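The dots-to-dashes trick above can be sketched in a couple of lines; the domain is a placeholder, and the matching wildcard certificate and the custom DNS server that decodes the label back into an IP are assumed to exist.

```go
package main

import "strings"

// ipToHost encodes a game server's public IP into a hostname under a
// wildcard domain, e.g. "203.0.113.7" -> "203-0-113-7.gs.example.com",
// giving WebSocket clients a name that a TLS certificate can cover.
func ipToHost(ip, domain string) string {
	return strings.ReplaceAll(ip, ".", "-") + "." + domain
}
```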
A
Other people, like if you're doing web games, which I've seen people want to do, have had that problem before. Yeah, it's a fun, tricky one. I have no objections to just, you know, loosening the validation and saying: if you want to do no ports, that's on you; that's fine.
C
I did want to call out, just for folks to be aware, that we're in the process of setting up a new GitHub organization for all of the Google gaming open-source projects, and as part of that Agones will migrate to the new org. We will also then take advantage of that and split out into three repos: there will be a repo for the master branch (you know, where the magic happens), then we'll have a repo for community, and we'll have a repo for documentation.
C
So it should basically get us organized, and we'll also be able to take advantage of some of the GitHub tooling functionality around teams and things like that, put in some automation, and hopefully make everyone's lives a little bit easier. This should be pretty seamless for most folks: GitHub will, as part of the migration, do redirects, and your stars and watches and all that will just transfer over. But essentially, instead of going to the GCP GitHub org, you'll go to the new one; it does exist already, it's just very dormant.
C
At the moment we haven't done the migrations yet, but that is coming soon. So just be aware: we'll send out full updates and notice when we are doing the migration, just so folks know, and then if there's any weirdness we can address it. But fingers crossed, it should be a pretty seamless process.
B
Cool, thanks. Yeah, I was going to ask: if we're splitting things out into multiple repos, have we thought at all (maybe it's too premature) about splitting the SDKs for different languages into different repos? Would that be useful, or is it nicer to keep them all together?
A
That's an interesting question. I think there should probably be a good discussion about how we want to split things up before we split things up, and once we've migrated we can start some meta-tagged stuff. I have opinions; I don't know where they land yet, but I think there are interesting questions about whether the SDKs should be pulled out or kept the same.
C
That would technically be a decision for the TSC, so you can discuss it, and as part of the migration process we'll have the full idea of how things get pulled out, and then we can look at what we want to create, I think.
A
Today,
usually
I'm
just
trying
to
think
actually
that's
not
entirely
true
I'm,
just
thinking
today,
with
this
SDK,
more
of
the
new
features
have
come
through
with
each
release.
But
that
being
said,
like
the
new
self,
allocating,
for
example,
is
implemented
and
go,
and
then
someone
just
recently
did
it
a
node.
But
what
constitutes
a
release
of
an
SDK?
A
You
could
probably
just
get
the
SDK
from
the
the
repo,
which
is
what
probably
a
lot
of
people
are
doing,
unlike
that
self
allocation
isn't
isn't
hasn't
propagated
through
to
like
C++
or
rust
or
some
other
things
just
because
no
one's
got
so
yet
so
maybe
having
their
own
self
releases,
an
individual
SDKs
I,
don't
know,
there's
probably
lots
of
options
there.
So.
B
At some point it's worth it, and before that it's probably not worth it, right? It's easier to have the single mono repo and deal with one set of builds and releases and so forth. Yeah. I think... April, if you haven't already, can you create an issue so people that aren't here today, or don't see the meeting notes or the recording, know that this is coming? Yeah... how do I do that?
C
All right, well, if nobody's got anything, we can wrap up a little early for today. I know the holiday is next week, and so a whole lot of folks have already started vacation, or are just thinking about it, but I hope everybody has a good rest of the month, and we will chat again in June. Of course, in the meantime, if you have anything that comes up, or anything you want to discuss, do reach out, and you'll be hearing more about the org migration and all that good stuff soon.