From YouTube: Agones Community Meeting 7.27.19
B
Sounds good. The 0.11 release finally rolled out, which is awesome. Thank you to everyone that contributed, and for all the hard work that went into testing, etc. That's fun! We're breaking stuff, and we're trying to make it so we break stuff less.
B
What was I gonna say? We have some good stuff in here, including the Unity SDK, which has been long awaited, so I'm sure there are some people who are pretty excited about that. And probably the other fun one, which we've had a lot of requests for, or at least I have very directly, is the port passthrough, for a lot of legacy systems.
B
They like to report to a main registration service what port they started on, and that's the port that they expect people will connect on. Before we had that, it was a problem, because the server would start on an arbitrary port, like 2600 or something, and then the externally exposed port would be random, and that would suck. So the port passthrough allows those two numbers to be the same, and that makes a lot of people really happy. So that's good for legacy game servers.
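The legacy pattern described above can be sketched roughly as follows. This is a minimal, hypothetical Python illustration: the registry dict, the server name, and the port numbers are invented for the example, and the real Agones Passthrough port policy is configured on the GameServer spec, not in application code like this.

```python
import socket

def start_legacy_server():
    """Bind to an arbitrary free port, as many legacy game servers do
    (port 0 asks the OS to pick one)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    return sock, sock.getsockname()[1]

def register(registry, server_id, port):
    """The legacy pattern: report the bound port to a central registration
    service; clients are then told to connect on that exact port."""
    registry[server_id] = port

registry = {}
sock, bound_port = start_legacy_server()
register(registry, "server-1", bound_port)

# Without passthrough, the externally reachable host port is assigned
# independently of the container port, so the registered number is wrong.
host_port_without_passthrough = 2600            # arbitrary, unrelated
# With passthrough, host port and container port are the same number,
# so the port the server registered is the port clients can reach.
host_port_with_passthrough = bound_port

print(registry["server-1"] == host_port_with_passthrough)  # True
sock.close()
```

The point is the final comparison: with passthrough, the number the server registered is the number clients can actually reach it on.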
B
Speaking of breaking changes, I can probably move into the next one, which is 0.12, which is about trying to make things not be breaking any more, because we have some breaking changes in the pipeline. Looking at the things we have left that are listed in our next milestone, it's basically everything we were looking at freezing for a 1.0. What was the word I was gonna say... I think we can probably aim for this to be a 1.0.
B
Assuming we get everything done (it may or may not work), I think that from what we've outlined as 1.0, or at least a stable API surface, which I think we can call the 1.0, we can get it done, which is pretty cool. So there's a bit more work to do there, and a bunch of trying to get those breaking changes done, particularly around moving the API groups and the CRDs and that kind of stuff.
B
Is anybody attending the event? Yeah.
A
Totally fine. Basically, OSCON is in Portland in about a month, and the reason I ask is that we've got the Google booth there, and we're doing some office hours and stuff as part of that overall programming, and one of the slots is kind of just a meetup to talk about open source and gaming. So hopefully we'll have some other folks there that join up, and, you know, we're trying to get this movement started.
A
We don't have any talks at the event, unfortunately, but we will have some sort of informal meetups and things like that. The other one is the open source gaming day... I don't want to call it a day, because it's a half day: Open Source in Gaming at the Open Source Summit, which is in San Diego in August.
A
It is a good conference if you're interested in kind of the philosophical side of open source rather than the technology, whereas KubeCon, of course, is pretty much all Kubernetes and other projects. So definitely check it out. We're excited about the open source gaming day. Mark is speaking, and I think we've got one with Unity as well, and Mark is speaking twice, so Mark is like half the agenda.
A
So yeah, Mark's the headliner, but if you're definitely interested in going, let me know. And then the next thing we have on here was an update on the new GitHub org. Everybody's stars and everything should have migrated over, so hopefully you've seen it and not had any issues with it.
A
We have three repos now: one for the core, one for documentation, and one for community stuff. On the community stuff side, I'm working on migrating some of our existing documentation over to that one, and also adding some new stuff. On the website side, Mark and I discussed it, and I told him we could wait until after 1.0 before we try to migrate over all that code. But long range, it's good for us, you know.
A
You know, maintainers and members and things like that. We've got some cool stuff in mind for it, but we're just kind of in that holding pattern of not making too many wild changes until 1.0 ships. Everything should be pretty smooth sailing at this point; there were a couple of hiccups right after the migration, but I think everything's smoothed out now. If you see any weirdness, of course, just always let us know.
C
Mark, while he's coming back, I was gonna ask a question. You probably noticed it in the org itself: there are no people listed as part of the org. I don't know if that's intentional, or if we just haven't gotten to it yet. I know that, with Prow in particular, you have to have people in the org for them to be able to use the Prow bots and such.
A
I mean, because the org is Google-owned, there are some lockdown security considerations that the open-source program office has for us, and so we can't do quite as much as Kubernetes does, since theirs is owned by the CNCF. But we'll definitely look at kind of the different processes. I know that most of the external folks are being added as contributors, but we can certainly look at it.
C
It seemed like most members were Googlers and not other contributors, and I'm hoping that, with Google for Games, we can be a little bit more like it's everybody working on the projects, not just Google people. So hopefully that's okay with the OSPO. And especially, like Mark said, if you can at least add people with read access, so they can be attributed to the org and people can know that they are working on this stuff, I think that would be great for sort of visibility.
A
Absolutely, I agree. So I'll dig into that a little more on my side, just to see exactly what all the rules and regulations are, and then, once we do get past 1.0, like I said, I'd love it if you and I kind of dove into this and got everything automated and running quickly.
A
And basically the concept is that, you know, it's a panda that's very cute, and it comes along and says, hey, you haven't done this yet. So it's kind of an automated alternative to us saying, hey, I need these things reviewed. If no one has any major objections, we were thinking about just turning it on for a while and seeing if we liked it, or if it gets to be too noisy or annoying.
A
Of
course
we
can
turn
it
off,
but
you
know
in
theory
it
seemed
like
it
can
be
useful
and
some
other
repos
and
part
of
people
seem
to
speak
highly
of
it.
So
yeah.
B
To speak to a point you brought up previously, which was, where was it, splitting up the repos, so that we would have, like, docs and stuff separate? It's maybe worth writing up a ticket on this, because there's probably a bunch of stuff that we could potentially split, and it's probably worth the conversation, especially around moving the client SDKs. We may want to keep those separate from the main project, and maybe some other stuff there.
B
Do we want to do something like, I know Kubernetes has that thing where they have multiple repos, but it's all pushed out from the main one. Do we know whether that's been a pain for people? Or do we want to just keep them in separate repos and have them be separate? Either way doesn't matter to me, I think.
C
Kubernetes is mostly doing that because it's a stepping stone to actually moving things out. That's kind of an intermediate step that they've been stuck in now for probably over a year, but the goal is to actually split things out. And again, the longer you wait to do it, the harder it becomes; they've waited a long time, and so it's hard.
A
Awesome. Any other questions or anything about GitHub? Otherwise, I'm gonna put in Pull Panda, and we'll see if the panda stays or goes. Alright, Stephen and Robert have the rest of it. Stephen's got a few things, so do you want to take it away, Stephen? Yeah.
D
Sure, hello again. I just wanted to follow up on the last meeting. I think we said then that we would support Kubernetes 1.12, but actually we're still on 1.11. We're running 1.12 in production and it doesn't seem to make much difference, but I think, as all the cloud providers are on 1.13 now, it might be time to move up to 1.12.
B
Yeah, I made a note of that on one of the tickets, that that's happened. I think it only happened in the last few weeks, which is good. I think it's a question of how hard we're pushing for 1.0 on the next release, and if we're doing that, where in the priority queue we want to stick it. I think it's just a question of how much work we have and whether we have time, but yeah, that's been on my radar too.
C
We'd do all of our testing and validation against 1.12. So I think I see what you're saying: let's flip all the CI to 1.12 and just say that's the version we're supporting, since that's what you guys are running, and that's the default version on GKE, which I think is true.
C
It's a pretty stable version of Kubernetes now; 1.15 just shipped, so it's a couple of versions old at this point. So I think it's perfectly reasonable to say, let's set the baseline at 1.12, at least for this next release, and hopefully for 1.0, and move forward from there. Yeah.
B
The only thing that I was looking at that will probably take a little bit of time is, at least on GKE anyway, we have a bunch of new defaults for 1.12, or different defaults, like node auto-upgrades, for example, which we definitely want to have turned off. We probably don't want nodes auto-updating. Okay, I second that, that would be bad. So we just need to document that as well, sure.
D
So yeah, next point: I just wanted to give an update on where we are running Agones in production. Our game has been in early access on Google Play in a few countries, and we just got it on iPhone as well, just in Malaysia and the Philippines, so we're trying to get a lot more traffic, because that's not early access, it's just open to those countries. And yes, it seems to be going all right.
D
We're getting about one matchmaking request per second. I don't know how many of those are retries versus people actually going in, but this is the point where you start to see it: we're running multiple game servers, and each game server typically has up to 50 players, so we might have up to four of those running at a time.
D
We start to see some of the scheduling issues with Agones, but of course we're a little bit behind; we're still on an older version, I know, I know. I think we might also try and get to 0.11 now that it's out, as that fixes some more of those bugs. But yeah, really happy so far. I'm really happy with the support on Slack we've been getting as well, so that's good, and hopefully we can find some other weird issues and create tickets for those. Thanks.
B
The annoying thing about Stackdriver is that it doesn't come with dashboards, because there's no API for creating dashboards, which is irritating. You just kind of have to handle it all on your own, and that's just the way it is, and that sucks, but there's nothing we can do about it. Okay.
D
Okay, and then just something cool we're trying to do with that game. We have web games that anyone can join, so the matchmaker will just find the first game server that's available. These are ones that are in the Allocated state, or, if not, it'll pick a Ready one, allocate it, and then start filling that one up. But a new feature...
D
...we're gonna add is like a start-your-own-game-with-friends. Someone will create a game, and then they'll send a code to their friends, and the friends will be able to join that. They'll actually join the game server while it's in a kind of lobby state, and then, when the leader is ready, they'll click go, and that game server will just convert to a regular game server. So I think we can achieve this just using our own labels.
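The join-with-a-code flow described above could be sketched, very roughly, like this, with plain dicts standing in for game servers and a "lobby-code" label that is entirely hypothetical; a real implementation would go through the Agones allocation APIs and label selectors rather than in-memory mutation.

```python
def find_server_for_code(servers, code, max_players=50):
    """Prefer an already-Allocated lobby carrying this invite code and
    still having room; otherwise allocate a Ready server and tag it."""
    for s in servers:
        if (s["state"] == "Allocated"
                and s["labels"].get("lobby-code") == code
                and s["players"] < max_players):
            return s
    for s in servers:
        if s["state"] == "Ready":
            s["state"] = "Allocated"          # stands in for a real allocation
            s["labels"]["lobby-code"] = code  # friends find it by this label
            return s
    return None

servers = [
    {"name": "gs-a", "state": "Allocated", "labels": {"lobby-code": "XYZ"}, "players": 3},
    {"name": "gs-b", "state": "Ready", "labels": {}, "players": 0},
]

# Friends with code XYZ land on the existing lobby; a new code claims gs-b.
print(find_server_for_code(servers, "XYZ")["name"])   # gs-a
print(find_server_for_code(servers, "QRS")["name"])   # gs-b
```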
C
I think we could. I'm just wondering if you know of any weird edge cases in the code, like whether some of the state machine stuff is a little bit too exposed or hard-coded, such that we couldn't add new states. I know that's one of the pitfalls people can get into with states in Kubernetes: if the state machine itself becomes too visible to clients, in a way that, if you try to change it, it will break clients. Yeah.
B
Each
state
tends
to
correspond
kind
of
to
almost
like
a
command
rather
than
an
like
an
actual
flow
from
one
to
the
other
from
within
within
okones.
It's
not
like
it.
Actually,
a
guy's
doesn't
really
expect
a
flow
per
se.
It's
more
like.
Oh,
it's
ready!
That
means
that
we
can
do
this
all
the
way
it's
allocated.
It
means
we
can't
do
this.
Oh,
it's
shut
down.
Okay,
we'll
shut
down
like
yeah
and.
C
That's a much better way to do it, because then you can add new states: now, if it's in this state, we have new behavior, right? Yeah. So maybe that's something, Steve, as you guys go forward: if it becomes something that you think would make sense more broadly, or should be part of Agones itself, open a ticket and say, we think there might be this new sort of state that we want to add to the system. Yeah.
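The state-as-command idea, where each observed state triggers an action rather than a step in an enforced flow, can be sketched like this; the states come from the discussion above, but the handlers and messages are illustrative, not the actual Agones controller logic.

```python
# Each observed state maps to an action to take now; there is no enforced
# ordering, so adding a new state just means adding a handler.
HANDLERS = {
    "Ready":     lambda gs: f"{gs} may be allocated",
    "Allocated": lambda gs: f"{gs} is in use; protect it from scale-down",
    "Shutdown":  lambda gs: f"{gs} will be torn down",
}

def react(state, gs):
    handler = HANDLERS.get(state)
    if handler is None:
        # Unknown states degrade gracefully, which is what makes adding
        # a new state (e.g. a lobby state) a non-breaking change.
        return f"unknown state for {gs}; ignore"
    return handler(gs)

print(react("Ready", "gs-1"))   # gs-1 may be allocated
print(react("Lobby", "gs-2"))   # unknown state for gs-2; ignore
```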
C
Alright, I think I've got the next couple. I was just going through the issue backlog this morning, trying to see if there was anything we should discuss during the meeting. The first one I think is really important before we can call things 1.0, and I know that Mark has sort of started working on it, although you didn't assign the issue to yourself, so I can't tell if you're planning on finishing it: splitting the API groups out. I saw the first PR got merged along those lines.
B
The
once
there
and
now
I,
remember
where
all
that
stuff
is,
but
I
can
help
people
work
through
it
I'm
currently
focusing
on
trying
to
get
the
that
reservation
distributed
matchmaking
stuff
done
just
because
that's
been
that's
been
there
for
a
while
too.
So
if
somebody
wants
to
pick
up
some
the
other
ones,
please
go
right
ahead.
C
So I guess the call to action there, for folks that are on the line or listening later, is to go to issue number 703, and maybe we can use the issue to say "I'm working on this part", so that people aren't duplicating effort, both trying to do the same part and not realizing it. If you want to help, please coordinate on that issue. And maybe, Mark, if you want to update the issue with the ones that are remaining, and maybe how difficult your SWAG is on them.
B
I think actually the next one is moving everything from stable into agones.dev, which is like all of them. Yes, that's like the rest, right. I have gotten rid of fleet allocations; there's a PR in place right now to get rid of that, so that's gonna make life easier. But yeah, I can write some notes on there. Okay.
C
Well, so the next thing I found is issue number 613, which was not tagged with the 0.12 milestone, but it was about sort of taking something away, which to me signals that this might be an issue with deprecation and compatibility if we do it later. So do we want to try to do this before 1.0? I think this issue was opened quite a few months ago, back in February, by you, Mark.
B
So I have a few thoughts on this, one of which is: I'd be curious whether, given the scheduler changes in 1.12, this may be a lesser issue, so we could probably just leave it. I'd be curious to see what packing looks like in 1.12 as well, and how well it works or not, or how much of an impact it has, because maybe it doesn't have an impact.
B
But I think I'd like to see what the performance impacts are of retaining that versus not retaining it, and see whether that's a trade-off we want to make. Right now we do the pod affinity stuff, so that when we spin up game servers in packed mode, they kind of go together as much as possible. That scheduler is not perfect, but it does a decent job.
B
We mention it, but it's only in the Helm section, yeah, and how to do it. I actually have it on my personal to-do list to pull that out and put it in the installation docs. I would like to completely restructure the installation stuff, so it's not just one big massive page of crazy, but that's a question of priorities. I'd love to split that out per install type and per cluster-creation type, and have individual pages, and make that a lot simpler to look at.
C
Okay, so the next thing I stuck on here was to discuss upgrades. I think, in my mind, the two really big things we need to do before we can cut a 1.0 and say we have stable APIs are: one, make sure that the APIs are stable, which is the thing we discussed earlier about splitting out the API groups, renaming things to be v1, and committing to backwards compatibility; and two, having an upgrade process.
C
So
with
our
sort
of
relatively
rapid
release
cadence,
here
of
being
every
six
weeks,
we
need
to
have
a
way
for
people
to
move
from
one
version
to
another,
and
so
like
Steve,
you
guys
are
sort
of
already
doing
this,
so
I'd
love
to
hear
sort
of
what
your
experiences
are
so
far.
You
know
I
think
mark
mark
wrote
up
in
one
of
the
issues
that,
like
the
safest
way
to
do.
This,
probably
is
to
create
a
new
cluster
and
to
sort
of
drain
into
the
new
cluster.
C
Is
that
something
that
you
think
would
be
reasonable?
Do
we
think
we
need
to
really
support
like
in-place
upgrades
within
the
same
cluster,
from
102
1.1
to
1.2
like
we
can
we
can
write
up
procedures?
Do
we
think
we
need
automation
before
100?
Or
can
we
add
that
afterwards
we
have
manual
processes
that
we
know
work
so
I
think
I've
been
sort
of
going
back
and
forth
about
this?
D
I
think
with
pretty
happy
would
look
you
got
now
mean
we
do.
We
do
have
basically
two
clusters.
We
have
the
current
version
and
the
previous
version,
so
you
can
well,
we
upgraded
do
the
tests
in
keyway
on
a
completely
different
environment,
but
then
we
can
at
least
on
the
previous
version
cluster.
We
can
do
the
upgrade
there
and
then
release
that
one
or
we
can
do
as
as
suggested
like
a
Bluegreen
deploy
as
well
yeah,
so
yeah
definitely
possible.
Maybe
it's
just
good
to
have
documentation.
C
I guess that's the question: are people okay with cutting a 1.0 and running production services if we don't have that? Is that a requirement for 1.0, or is that a nice-to-have? I think we will build it eventually; the question is, are we okay calling something 1.0 without it? And one thing I was gonna mention earlier:
C
It's
awesome
to
see
that
you
guys
are
running
production
service
on
a
gonna
snow,
because
one
thing
mark
and
I
talked
about
was
it's
hard
to
call
something
one
ATO
until
somebody's
running
something
in
production,
because
you
don't
know
if
it
really
works,
and
so
you
guys,
you
know,
are
sort
of
taking
some
of
the
arrows
and
doing
like
the
the
pre
one
that
over
releases
and
giving
us
the
confidence
that
you
know.
We
can
call
something
one
night
out,
because
people
are
using
it
for
real
and
so
I.
Think.
That's
that's!
E
Yeah, so my next question is about some performance reference points. If we have a cloud test cluster, we can test and see some results against the same cluster. I'm kind of interested, when I'm making some change, for example a change to how game server statuses are updated, in how it would impact performance in that situation.
C
We want to be able to run both post-submit tests and periodic tests, like every half hour we run our tests, to make sure we have a signal that allows us to detect flakiness in a better way than just when PRs get created. And then also we can run these longer sort of scalability tests automatically, like we can say: once a day, run this.
C
We might, for 1.0... I don't know if we want to commit to it in six weeks; maybe we need to do some of these things manually. But we do want to get that automation put in place, maybe, as April was saying, after 1.0, when we start to rejigger some stuff with both Prow and with the GitHub org and so forth. I think that's gonna be important to get set up, so we reduce the toil of doing it manually as we are cutting releases.
A
And the last thing that I wanted to add in was that we're getting docs help. Especially as we talk about getting ready for 1.0, we have someone, his name is Riggs, you may know him from doing a lot of the Istio docs. He's gonna help us out with some of the Agones docs, doing kind of a consultancy review of what we really need. He's gonna do a friction log and all that good stuff.
A
So he'll be working on that, and hopefully we'll have something that we can look at; I think next week is what he said. So keep an eye out for some tickets, maybe, where we can make some edits. And if you're looking for some easy, well, I won't say easy, but if you're looking for some non-code contributions, docs are always something that will need it, and the great thing about Riggs is he can kind of help us focus.
C
Cool. I snuck one last thing on the agenda after your last thing, April, which is that Mark put a comment related to upgrades, talking about how we can upgrade Kubernetes underneath Agones, so, like, when we want to switch from 1.12 to 1.13, what does that look like? I have some thoughts on this. I think that if, for upgrading Agones, the thing we write up first is "move to a new cluster", then that also handles upgrades of Kubernetes underneath Agones, right?
C
When
you
move
from
a
go
nice
one
@o
to
1.1-
and
we
said
with
1.1
we're
now
on
113
instead
of
112,
you
create
your
1.1
okones
cluster
on
13
and
move
over,
and
everything
should
be
fine.
I
think
the
tricky
part
comes
when
you
start
talking
about
the
in-place
upgrades,
both
of
a
bonus
and
of
kubernetes
underneath
okones,
and
if
we're
okay,
sort
of
punting
that
for
now
and
saying
like
this
might
work,
it
should
work,
but
the
best
practice
is
using
a
cluster
and
that's
kinda.
What
we're
going
to
support
for
1.0.
C
We can punt that a little ways down the road, but we are gonna want to start testing this, in the sense that when you upgrade Kubernetes, you're going to want to make sure we have our pod disruption budgets and graceful termination periods and all that sort of stuff set correctly, so the node upgrades don't kill stuff that's being used.
C
I think that's where the trickiness comes in: verifying that that stuff is actually working correctly. And on providers like GKE, I know there are some sort of hard timeouts on those values; you can't set them to be infinite, because then you'd never be able to actually upgrade your nodes. So we need to make sure that those parameters are actually working for people.
C
If,
if
your
game
instance
needs
to
run
for
a
really
long
period
of
time
before
you
can
safely
drain
it,
then
you
aren't
actually
going
to
be
able
to
safely
do
and
in
place.
Node
upgrade
you're
going
to
have
to
either
to
a
new
node
pool
or
a
new
cluster.
So
I
think
there
are
some
sort
of
tricky
edge
cases
there
as
we
start
to
talk
about
upgrading
the
humanized
version
underneath
again
ace.
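A rough way to think about the drain decision above, with entirely hypothetical numbers and names; real values depend on the provider's drain timeout and the termination grace settings you configure.

```python
def upgrade_strategy(max_session_seconds, drain_timeout_seconds):
    """If the longest game session fits inside the provider's hard drain
    timeout, an in-place node upgrade can wait for graceful termination;
    otherwise sessions would be killed mid-game, so move to fresh nodes."""
    if max_session_seconds <= drain_timeout_seconds:
        return "in-place node upgrade"
    return "new node pool or new cluster"

# Hypothetical numbers: a 20-minute match vs a 1-hour provider drain limit.
print(upgrade_strategy(20 * 60, 60 * 60))   # in-place node upgrade
print(upgrade_strategy(4 * 3600, 60 * 60))  # new node pool or new cluster
```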
C
Yes, yeah, I think that's the thing. The risk is that, if you do it while you're serving, you hit the edge cases where you're accidentally killing things at the wrong time, right? Yeah. If you're moving to a new cluster when you're not serving, you know you no longer have active game servers that you're actually gonna kill, and so you're basically just rolling to fresh nodes that are ready for the new version.
A
All right, well, if nobody's got anything else, then we can wrap it up for today. We are getting into the crazy conference time of the summer, so just a simple caveat that our next meeting is scheduled for July 25th. I feel like there's something going on that week, but I don't know; it's not on my calendar.
A
And we can always adjust things as needed, but I think maybe 9:30 will be a good compromise for the West Coasters, who don't like being up that early. I mean, who doesn't want to stay up all night?
A
All
right,
yeah
yeah,
so
it
sounds
like
maybe
with
July,
and
we
can
we'll
talk
about
July.
As
for,
if
we
pick
a
different
date,
we
may
do
it
the
week
before
or
week
after
or
the
week
after
would
be
August
so
but
then
again
in
August
that
week
that
we
have
scheduled
for
the
meeting
is
open
source
summit.
A
...North America, so it might change things up a bit. Summer is always fun when it comes to conference planning and everything else, but I'll send messages out and get any changes on the calendar for us. If you have any big conflicts, please let us know, so we can make sure we accommodate as many folks as possible. All right, other than that, happy Thursday; have a good rest of the day, or rest of the evening, depending upon where you are, and we'll chat soon. Thanks, everybody.