From YouTube: Agones Community Meeting Jan 2020
A: Right, okay, awesome. We have some good questions in the comments and the working doc, and if anybody has anything you want to talk about, just go ahead and add it in, feel free. The first thing that we had was the 1.3 release — but before that, happy 2020, everyone. Welcome to the roaring 20s.
C: I've marked the upgraded gRPC as a breaking change. Everything should continue to work the way it did before, but I've highlighted it just in case something does break — so let us know. It should be all fine; gRPC is annoyingly backward-compatible. I think the only thing that might potentially have changed is some of the REST interface for watch — it might be slightly different, but I actually don't think it has changed, and I don't know if anyone was actually using that other than myself.
C: When I did the Node SDK update, there is now a node package hosted on GitHub, which has some other inherent issues — literally as of today, so we've reopened that issue. Now we're going to look at moving that to npm, basically — on which, I don't know, because I don't know anything about npm, and I asked people and they're like, "it's fine." Apparently you have to authenticate against GitHub.
C: So if you're just using the Node SDK, nothing's actually broken. Ideally we'd still like you to upgrade eventually, but you may want to wait until we move it, or do whatever you see as appropriate for you. Talking about new features, just because we can, since we're talking about it: Terraform support for EKS is nice. You can find all the recordings of the community meetings on the Agones website.
C: There's a bunch of validation improvements. You'll see there's some work being done for new features going through alpha and beta stages, so they'll have feature gates, things like that. Watch GameServer is now working in the Unity SDK, so that's all rounded out for the Unity SDK. Configurable log levels on the Agones controller, and a load of logging stuff has been moved to debug where it was previously info, so it can be a lot less verbose for everybody, I think.
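Feature gates like the ones mentioned above are typically passed to a controller as a single string of `Feature=true/false` pairs. As a hypothetical illustration (the separator and feature names here are assumptions, not Agones' actual implementation), parsing such a string might look like:

```python
def parse_feature_gates(spec: str) -> dict:
    """Parse a feature-gate string such as "PlayerTracking=true&Example=false"
    into a {name: bool} mapping. Separator and names are illustrative only."""
    gates = {}
    if not spec:
        return gates
    for pair in spec.replace("&", ",").split(","):
        pair = pair.strip()
        if not pair:
            continue
        name, _, value = pair.partition("=")
        gates[name.strip()] = value.strip().lower() == "true"
    return gates

def is_enabled(gates: dict, name: str) -> bool:
    # Features not mentioned in the gate string default to disabled.
    return gates.get(name, False)
```

The "unknown features default to off" behavior mirrors how alpha/beta gating usually works: new code paths stay dark unless explicitly switched on.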
C: There's still some work there to be done, but that's a good step in the right direction. There are more examples to improve Stackdriver metrics, and I think that's probably all the big things — unless anything else jumps out that I haven't mentioned. Oh, preemptible VMs: that's a nice bug that got fixed when we were using preemptible VMs in GKE specifically. There was a nasty bug in there where the node would disappear and then come back with the same pods back up in there, and the IP addresses were all messed up, because the IP address was very different. So now we pick that up and we get rid of those. So there's some nice bug-fix stuff in there as well. Cool — I think that's it, yeah, plus the upgrade for gRPC.
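The preemptible-VM fix described above amounts to noticing that a game server's recorded address no longer matches its node. A minimal sketch of that kind of check (the data shapes are invented for illustration; the real controller works against the Kubernetes API):

```python
def find_stale_game_servers(game_servers, node_addresses):
    """Return names of game servers whose recorded address no longer matches
    their node's current address (e.g. after a preemptible VM came back with
    a new IP). `game_servers` maps name -> {"node": ..., "address": ...};
    `node_addresses` maps node name -> current IP."""
    stale = []
    for name, gs in game_servers.items():
        current = node_addresses.get(gs["node"])
        # Node gone entirely, or node returned with a different IP: recycle it.
        if current is None or current != gs["address"]:
            stale.append(name)
    return stale
```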
C: Move to OpenTelemetry: as far as I'm aware, OpenTelemetry has a compatible SDK, so you can actually do a drop-in of OpenTelemetry, but OpenTelemetry was still very much in alpha the last time I looked at it. So at some point we'll need to do some investigation about when and where we want to switch over, but we shouldn't break anything when we do — it should still be all the same metrics at the end of it.
C: It should flow all the way through. I have a strong feeling we did it in the SDKs, but the old ones should still work — it's gRPC and, like I said, annoyingly backward-compatible. I'd have to look at what the SDKs do; for Node I feel like we did a packaged-up JSON — here we go: gRPC 1.2.2. Yeah, so it should be in the SDKs as well. We advise upgrading, but it should still work — hence the breaking-change flag, just in case somebody runs into something.
F: I put this on here a little while ago. After the holiday break we had a long list of PRs I was trying to go through and triage, and there were some rather old ones I pre-closed — one about the Java SDK that was really old and hadn't been touched for a long time, which I wanted to discuss here; but since I put it on the agenda, Marcus, it's closed, I think.
F: That begs the question of whether we want to have a policy for how long we leave PRs open. If you have something in draft, I think it's fine to leave it in draft as long as you comment every once in a while that you're still working on it. But for things that are open and become idle — especially if the author is no longer showing up or responding to comments — it feels like we should just go ahead and close them.
F: Closing a PR doesn't actually delete people's code or anything; it just closes the PR and gets it off of our dashboard. You can always reopen the PR if you're the author to put it back on the dashboard, or ping on the PR and say, "hey, I'm still working on this" if it gets closed. So I guess part of this is a PSA: if you have a PR you haven't touched in a while and it gets closed, and you're still working on it, just reopen it.
C: One thought I actually had yesterday — I don't know if anyone likes the idea — is that after every release I usually try to do a run-through of PRs and issues to see if there's anything stale or that should be closed, anything like that. I don't know whether that's a good idea, or whether we should have something that's more hard-lined; I don't know, I mean.
F: I mean, even if we don't have a bot like Marco's saying — if we just say once per release we'll go through and close — that would be fine to start, and if there's a bot we can just flip on so we don't have to, then we'll do that, yeah. It's a lower volume, so if we just do it ourselves I think that's fine, yeah.
C: But I mean, the bots that we've got — like Probot, and another one for Kubernetes — will do the thing where, after a number of days, it marks an item as stale, and everyone gets a notification, like "hey, you know"; and if nobody comes back after another number of days, then it's like "okay, now I'm going to close it." And then you can do things like pin something and say, "okay, we know this is never going to go stale, make sure it stays there" — it's just a labeling system. So, you know, as soon as we turn one of these on there will be a lot of notifications across the board and we'll need to do a whole bunch of triage, but there's nothing I've seen that will just go "okay, we're shutting this down now." I just linked to what the stale bot in Probot looks like, which does pretty much that as well.
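For reference, the Probot stale app mentioned here is configured per-repository with a `.github/stale.yml` file; a minimal sketch (the day counts and labels are placeholders, not a recommendation):

```yaml
# .github/stale.yml — configuration for probot/stale (values are illustrative)
daysUntilStale: 30        # days of inactivity before an issue/PR is marked stale
daysUntilClose: 7         # days after being marked stale before it is closed
exemptLabels:             # labels that opt an item out of the stale sweep ("pin it")
  - pinned
  - security
staleLabel: stale
markComment: >
  This issue has been automatically marked as stale because it has not had
  recent activity. It will be closed if no further activity occurs.
closeComment: false
```

Commenting on a marked item resets the clock, which matches the "comment every once in a while that you're still working on it" convention discussed above.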
A: Maybe it was — I don't know — somebody's bots. I wouldn't be surprised if more features start coming in along those lines, but we can play with that. But yeah, like Stephen was saying, I'm a fan of the personal touch for issues, at least for now. When we get to Kubernetes scale, that'll be another thing, but right now, yeah.
C: Yeah, I was going to talk about player tracking. This is the feature that I've been working towards; the little work that I have sitting in PRs is more around feature flags and getting the stuff in place so that we can do the alpha-to-beta process, but currently the design is up there. It seems like no one has loudly complained about the design that I threw in place, so if you get a chance, have a look — it seems to be reasonable.
C: We'll update that on a semi-regular basis. While it would be really handy to be able to say things like "hey, what's my list of player IDs in this game server" and track it all inside Agones itself, and even the SDK, and expose that through — I think it's probably too risky for the CRD. So that's pretty much the consensus we seem to have come to, but if people have other ideas or other thoughts, please do jump on; otherwise I'm going to start coding stuff soon.
C: It leads to a whole bunch of other interesting things too, like being able to autoscale based on player count, or potentially being able to do allocations based on capacity — like, "give me a game server that has room for four people." There's some really cool stuff that can come out of that too.
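As a hypothetical sketch of the player-tracking bookkeeping being described — connected player IDs, a count, and a capacity an allocator could match against. This is an illustration only, not the Agones SDK API that was under design:

```python
class PlayerTracker:
    """Track connected player IDs against a capacity — the kind of state the
    design above would keep inside the game server / SDK."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._players = set()

    def connect(self, player_id: str) -> bool:
        if len(self._players) >= self.capacity:
            return False  # server is full; the allocator should look elsewhere
        self._players.add(player_id)
        return True

    def disconnect(self, player_id: str) -> None:
        self._players.discard(player_id)

    @property
    def count(self) -> int:
        return len(self._players)

    def has_room_for(self, n: int) -> bool:
        # Enables queries like "give me a game server with room for four people".
        return self.capacity - self.count >= n
```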
E: So Carrie's at the office today. So yes, it's just something we realized internally: we always keep this pool of ready game servers, and there might actually be a case for having it set to zero. For instance, we have a matchmaking phase, and at that point we could be spinning up the game server, so we could reduce costs by having a ready pool size of zero.
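A ready pool size of zero would amount to a Fleet with `replicas: 0`; a minimal sketch of what that could look like (the image and port here are placeholders):

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: example-fleet
spec:
  replicas: 0   # no warm "Ready" game servers kept around; scale up on demand
  template:
    spec:
      ports:
      - name: default
        containerPort: 7654
      template:
        spec:
          containers:
          - name: example-server
            image: gcr.io/example/game-server:0.1   # placeholder image
```

The trade-off discussed in the meeting is cost versus latency: with zero ready servers, the first allocation after matchmaking has to wait for a game server to start.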
F: Like I said, the autoscaler is supposed to add things when they're needed — does the autoscaler look for failed allocations as a signal? That's it, right? Because that would be a signal: it's at size zero, we failed an allocation, the autoscaler says "hey, an allocation failed, let me size it up to one." But then I think something would also have to retry the allocation, right? There's some weird retry logic there that would have to happen, and I don't know who would be responsible for it.
C: You could almost do it as a manipulation of the autoscaler — like you have a buffer at zero with a minimum of zero, and then as soon as the matchmaker comes through, or something like that, or maybe an allocation fails, you push it to a buffer of one or two. You're sort of tweaking your autoscaling rules. It probably ends up being pretty game-specific.
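The autoscaler manipulation being described maps onto the FleetAutoscaler's Buffer policy. A sketch of the idle state (the numbers are illustrative, and note this mirrors the speculative setup from the discussion — actual Agones validation may require a buffer size of at least 1):

```yaml
apiVersion: autoscaling.agones.dev/v1
kind: FleetAutoscaler
metadata:
  name: example-autoscaler
spec:
  fleetName: example-fleet
  policy:
    type: Buffer
    buffer:
      bufferSize: 0    # idle state: keep no ready servers warm
      minReplicas: 0
      maxReplicas: 10
# When the matchmaker kicks in (or an allocation fails), patch bufferSize
# up to 1 or 2 so ready servers are warmed ahead of the next allocation.
```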
E: Then, yeah — thanks for looking at the issue where the pods and the game servers get out of sync. I think that's our major production issue now. It doesn't happen very often, but when it does, of course, we have to go in and fix things. Actually, I think we're writing a workaround right now — a colleague is working on that: we just scan the game servers and the pods every so often and then delete game servers if they're out of sync, yeah.
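The workaround described — periodically scanning game servers against pods and deleting the ones that have fallen out of sync — could look something like this in outline (a pure-Python sketch with invented data shapes; the real version would list both resources via the Kubernetes API):

```python
def out_of_sync_game_servers(game_server_names, pod_names):
    """Return game servers that no longer have a backing pod of the same name."""
    return sorted(set(game_server_names) - set(pod_names))

def sweep(game_server_names, pod_names, delete):
    """Run one pass of the workaround. `delete` is whatever callable actually
    removes the orphaned game server (e.g. a Kubernetes client delete)."""
    for name in out_of_sync_game_servers(game_server_names, pod_names):
        delete(name)
```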
E: It's the highest of all the cloud providers. So I just wondered — I think there was a question in chat recently as well about, you know, supporting newer versions. Obviously we do, at your own risk, but I wonder whether we could have something automated — I mean, how do we even know which versions we support? Is it easy to add a later version and say, "well, we don't support it, but at least the tests pass," or something?
F: I think the other answer here is that I'm not actually sure how easy it would be to run that test with Cloud Build, right — parameterizing Cloud Build to run on lots of different versions might be hard. It's another impetus for me getting back to Prow; I've got other things I'm working on, but if we had Prow I think it'd be really easy to just say "do a second test," you know, even if it's at a low frequency, if we don't want to spend a lot of money on it.
C: It depends how much support we want to give, I guess. Really, if it's a version that we support, then yeah, we want to run that on master, because otherwise we won't know if it breaks at any point in time. But if we're not supporting a version, how valuable is it to us as the developers to say, "hey, we know it works here"? Because then we're like, "well, we don't support it anyway, so why are we putting in the effort?" I think.
F: You know, every couple of hours over the course of the six-week release period, at least. Right now, I think, generally, we try to switch the Kubernetes version relatively early in the release cycle, and not right before we cut a release, so that we get those sorts of miles — running it over and over on the new version before we say "yeah, we think it's actually going to work," because we've run it a bunch of times, not just once. I guess.
C: We hit some SSL cert stuff on 1.12, when we were supporting 1.12, because the certs were expiring — a bunch of our tests were failing on our documentation because we have to go check links. I mean, now we support 1.13, which is fine, but yeah, we're just waiting on EKS to come through now. We could potentially make an exception in this case, given that it's been such a while, if there's good reason; I don't know.
C: I mean, if you really want — they're all just Go tests, really. So you can set a version, push it up; all the tests are in the test directory, and in the end they're Go tests. You can run them as long as you've got your Kubernetes credentials locally; it should just work. That's how I run the end-to-end tests — I mean, I even run them in the IDE.
F: Okay, yeah — there are also make targets. If you do a git checkout, you could literally sync to the release commit if you really want to test exactly that, and then just run the make targets to create a cluster for you and run the tests against it, right — and that'll really simulate exactly what's being done by Cloud Build.
C: I don't know where we got with that — it should be a lot less verbose now in this release. We actually have options in the controller as well: we have options at both the controller and the game server level — the controller at install time, the game server level at runtime — where you can control the log levels. We've also been actively moving a bunch of stuff to debug. I think there's some more stuff we can convert to debug — some of the SDK stuff, the GameServer SDK server; there's probably some more stuff there. We're taking a piecemeal approach so we don't move too much across at once, but there's a running ticket there. So if there's certain stuff where you're like, "oh, this is still showing up a lot even at the info level," chuck it in there. I think I have a note somewhere to go through the SDK server and move a lot of the runtime stuff over to debug.
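Concretely, the two knobs described might look like this — a Helm value at install time for the controller, and a per-GameServer setting at runtime for the SDK sidecar. The exact field names below are assumptions based on the discussion; check the Agones documentation for your version:

```yaml
# Install-time: controller log level via Helm values (assumed key name)
agones:
  controller:
    logLevel: debug
---
# Runtime: per-game-server SDK sidecar log level (assumed field name)
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  generateName: example-
spec:
  sdkServer:
    logLevel: Debug
  ports:
  - name: default
    containerPort: 7654
  template:
    spec:
      containers:
      - name: example-server
        image: gcr.io/example/game-server:0.1   # placeholder image
```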
B: I would probably start working on them next, because this is about — we have a few open metrics issues, like one regarding a full new framework for metrics, where we would have a bridge between OpenCensus and, like, an API in front of OpenCensus in this one. So I don't know — can we start from this one and then go with this API in front of OpenCensus, or how is it going to be? That's —
C: — OpenTelemetry. I just don't want us to make decisions based on OpenCensus that don't work for OpenTelemetry for some reason — that's my only concern. It may be fine; I have no idea. Again, I don't know anything about metrics hooks, but yeah, that's all good. There was a previous PR for upgrading OpenCensus — to, like, whatever the version was — that we ended up closing because it was really big. We could go back and find it and maybe redo that to bring back the upgraded OpenCensus, because, again, it was really big.
B: By the way, so yeah — generally a game server can produce arbitrary metrics, which means it can select the names and so on in the way we need, because currently OpenCensus reveals all these things. So we should be able to change it on the fly — and how can we limit game servers in that way, so they don't waste our metric space? I mean, there were some tickets about this. Okay, I will.
C: How to pass arbitrary metrics from a game server to OpenCensus? Oh yeah — yes, all right, I've got you. That's it. I think that's an interesting one; I've definitely heard it from a bunch of people. Basically they're like, "yeah, we want to be able to pass any kind of metrics we want" —
C: — from a game server into either Prometheus or Stackdriver, and I think it's pretty reasonable to make that easy, considering we already have that connection. It honestly becomes an interesting question about how we want to expose that. Personally, I'm not worried about —
C: — you know, users overloading our, like, metric space. That's kind of on them — to shoot themselves in the foot if necessary, or to have a backend that can handle as many metrics as possible. But we probably need to come up with some design first about how that looks: do we follow, like, OpenTelemetry — like, we could take the OpenTelemetry API, basically mimic it, and just pass everything through? I —
C: — don't know. Yeah, I mean, then we're doubling up, and it's going to be a bunch of work, because we're basically going to expose it as, like, gRPC endpoints somehow. It requires some research; I don't know what the right way is — if anyone else has thoughts, especially anyone who actually does more metrics work than I do. Or could we ship the OpenTelemetry —?
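A hypothetical sketch of what a pass-through metrics surface on the SDK might look like: the game server records named values, and the sidecar would forward them to whatever exporter (Prometheus, Stackdriver) is wired up. Everything here — names and shapes — is invented for illustration; no such Agones API existed at the time of this discussion:

```python
class MetricPassthrough:
    """Collect arbitrary game-defined counters and gauges so a sidecar could
    export them — a stand-in for the gRPC-endpoint idea discussed above."""

    def __init__(self):
        self._counters = {}
        self._gauges = {}

    def inc_counter(self, name: str, by: float = 1.0) -> None:
        self._counters[name] = self._counters.get(name, 0.0) + by

    def set_gauge(self, name: str, value: float) -> None:
        self._gauges[name] = value

    def snapshot(self) -> dict:
        # What an exporter would scrape/flush on its own schedule.
        return {"counters": dict(self._counters), "gauges": dict(self._gauges)}
```

Whether names should be validated or capped — the "metric space" concern raised above — would live in whichever layer sits between this surface and the exporter.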
F: Totally — if you have thoughts about what we think that means in terms of graduation, in terms of, like: do we have adequate test coverage to make sure it doesn't regress; do we want to write any docs about people migrating from the old to the new API — any of that sort of stuff, it would be great to jump on that issue. Yeah, and you can also throw your hat in and say, like, "hey, I've got cycles."
H: Push it — you'd need to have access to do it that way. Thoughts about what would be helpful... thank you.