From YouTube: Agones Community Meeting April 2021
A: All right — so thanks for joining us for the April meeting. It looks like, to kick things off, Robbie's got lots of stuff about the release.
B: Yes, so a couple things, just a heads up — just the normal announcement: the 1.14 release candidate was cut yesterday, and the 1.14 final release is planned to be cut next Tuesday, unless someone brings up a release-blocking issue. So if you have a chance to test this release candidate, that would be awesome.
B: I will also point out, this is the first time I think anybody except for Mark has ever cut a release. So hopefully everything went well — as far as I can tell, everything works just fine, but yeah. We're working on expanding the pool of people that are able to cut releases, so that it's not all on Mark's shoulders.
B: So again, if you have a chance to test it out, let us know if anything doesn't work. Also, in the same vein — Windows, excuse me, Windows game server support is marching along, sort of slowly. We now have what we call, sort of, alpha support. What this means is that we are building multi-architecture containers for the SDK sidecar, and also for the example game server — the simple example game server — and you can spin up a Windows node and launch a game server on a Windows node, either your own or this example.
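As a rough illustration of the setup just described — this is my own sketch, not something from the meeting: the image name, tag, and port below are placeholders, and `kubernetes.io/os: windows` is the standard Kubernetes node label for selecting Windows nodes:

```yaml
# Illustrative sketch: run a game server on a Windows node.
# Image name/tag and port are placeholders; check the Agones
# examples for the real simple-game-server Windows image.
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  generateName: simple-game-server-win-
spec:
  ports:
  - name: default
    containerPort: 7654            # placeholder port
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: windows  # standard Kubernetes OS label
      containers:
      - name: simple-game-server
        image: example/simple-game-server:windows  # placeholder image
```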
B: So I think there are definitely some people who've sort of poked at issue number 54 — which, obviously, based on all the numbers, has been around for a really long time — about Windows support, and it is now starting to come together. I will say that the testing for this has been pretty light so far; it's been through some sort of manual smoke tests, and we don't have any CI set up for it.
B: Yet — although that is something we plan to do in the future. And I think Windows support in Kubernetes is still a little bit in flux, and in particular on GKE is still in flux, so expect some changes coming down the road. But it is definitely now in a state where, if this is something you're interested in, you can sort of start testing it out and giving us feedback, so we can improve it. So yeah — that's really exciting. Feel free to go and give that a spin.
B: If you try to do this on any older version of Agones, before this release candidate, the game server can come up, but the sidecar will not come up — it'll just crash, because there was no Windows version. So this is the first time where you can do it without a development build. So that's pretty exciting.
B: Let's see — okay, so next: I mentioned at the beginning, about the 1.14 release, that I cut this release, and we'd like to expand the pool of people — and I guess really by "people" I mean Googlers — who can cut releases, because there are a couple of steps right now that still require internal Google access. And the person that we would like to cut the next release is not currently, sort of, a core contributor, maintainer, or owner of Agones. And looking at the governance for the project...
B: We only really have, sort of, two levels. We have somebody who sends PRs, and we have somebody who is an owner, an approver — and that person has, you know, responsibilities and access that kind of go with that, where, you know, they have to be approved by people in the community, they're responsible for merging PRs, et cetera. And, you know...
B: I don't think that the person we would like to cut the next release is going to hit that bar before the next release comes out in six weeks. And so what I'd like to propose is we add a new role to the governance page, which is what I call a "releaser" — which is basically somebody who still has to have write access to the repository in order to do a release.
B: But you don't have to be an owner of the repository to do a release. And, you know, I think having write access allows you to do things like merge PRs, but I think what we would do, barring any technical way to prevent people from doing that, is, sort of by social contract, say that if you are a releaser — not an approver — thou shalt not merge PRs, or otherwise...
B: We can take away your privileges from being a releaser. And so I think this would be sort of a nice intermediate step, where we don't have the bar quite as high, but are still able to, you know, train people up to be able to cut releases, to share the load. So I'd like to propose that — I will send a PR, but I wanted to bring it up here first. And while I was doing that, I was thinking about...
B: Maybe other people in the community have other roles that we think we should add, that are kind of, maybe, some intermediate steps — to kind of let people sort of more slowly walk up the ladder towards being an approver.
C: Yeah — sorry — I think that's a good idea, Rob. I think — have we discussed, in the past, SDK-based owners? I think we've discussed that in the past, and that'd be a good way to start.
B: Yeah, I actually just added something to the bottom of the agenda, asking if we should add more people to OWNERS files. So the difference between — like, in the governance doc — there's, like, an approver, with, like, access to merge PRs and that sort of stuff, and then an owner of an SDK; like, Steve is listed right now in the OWNERS file for Node.js.
B: ...page — like, it'd be great if we could update this with some more info, as long as people are all on the same page. But I think we'd — I'd like to move to a model, also, where we allow, like, Prow to do auto-merges if people in OWNERS files approve, right? So then you wouldn't actually have to get commit access to be able to approve a change to the SDK, right? So then we can have different levels of approvers also, where, like, you could be an owner of an SDK.
A: Yeah, but I'm talking about, like, specifically the access to tools and things like that — like we ran into this with Knative, and I've been trying to actively — like, it doesn't make sense. So I will point out, Prow is one of those obnoxious things where, at least for most of them, the config lives in a GCP repo that not everybody can access, and so that causes problems.
E: Sweet. I just wanted to give a couple updates on some of the things I was working on. So, the advanced filtering stuff — I'm working on that. My work process is very much, like, do a lot of the work, and then rip it apart into smaller chunks, and then push the smaller chunks up as separate PRs.
E: It seems like it's going fine. Thank you, Dom, for giving me feedback. I had a comment at the bottom about whether to have a single feature flag or two feature flags — Dom seems to think two is fine. It'll come through as a PR at some point anyway, but thank you.
E: If you have a chance to look at that, and you agree, that would be great — but just letting people know that that work is going forward. Once I'm in a sort of happy spot, where I think it's mostly working, I'll start pulling it apart and submitting PRs. The other one — actually, I'll throw this your way as well, Pune, because you know this stuff super well — is the resource-based APIs for player tracking.
E: I took an initial stab at that — at rebuilding that. I haven't done the capacity stuff, but just the plain resource stuff, to make sure I got that at least vaguely right, so it matches. So, Pune, if you have a chance to look at that and just be like, "yes, I'm on the right track," that would be awesome. That work actually doesn't block — yeah.
E: That work doesn't actually block any of the other player tracking stuff, because none of the CRD stuff changes. But I just want to make sure I'm on the right track, and then I'll go and do the capacity stuff, and actually work out how to make that stuff actually work — because that'll be fun and interesting. I'm actually genuinely curious whether we need to upgrade grpc-gateway to make it work. But maybe we don't — I'm not sure.
E: I'm wondering, with the update to the resource-based APIs, whether it will just be fine — because it's all include-based — or whether we need to upgrade the grpc-gateway. What — what?
C: The only thing that doesn't work is field masks, but I don't think you will be using any of those in a minute.
E: Yeah, that should be fine. If push comes to shove, we could do an upgrade up to the latest 1.x, see how that goes, and then do all the backward compatibility testing for 2. It looks like 2 has — what's the word — config settings to make sure it's backward compatible with 1. So, yeah.
E: I think those are my two things — just letting people know. Yep.
D: Hey — so, yeah, I just — it's sort of a general comment. I think there's a recommendation in the Agones docs to disable automatic node upgrades.
D: It's just something we think about, trying to get games to more kind of run themselves in terms of maintenance. And I was just wondering — well, if it's even technically possible — to enable automatic node upgrades and yet, you know, not have the node being shut down while games are in play, while they're allocated.
B: It will, I think, wait up to an hour for the drain — the equivalent of a `kubectl drain` — to finish, and then I think it'll wait up to another hour for graceful termination.
B: Maybe those numbers have changed since I last looked, but, you know, I know we put the — I think it's the label, or annotation — on our pods so they don't get removed during scale-down; like, we prevent the autoscaler from reaping things even if they're in the Ready state, right? Because those are your warm standby pods. I'm not sure if upgrades will respect things that are allocated, or if it will pick a node and say, "I'm going to drain that node, even if something's still allocated — it's time to move forward and kill it and go on." So, I think we haven't tested it, which is why the recommendation is to turn it off. So it'd be worth sort of testing, and seeing if there's a way to make upgrades do the right thing.
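For reference, the "label or annotation" being recalled here is, I believe, the cluster autoscaler's eviction hint — Agones applies this to game server pods itself, so the snippet below is only an illustration of what the mechanism looks like on a pod:

```yaml
# Illustration: the cluster-autoscaler annotation that marks a pod
# as not safe to evict during scale-down. Agones sets this itself;
# shown here only to make the mechanism concrete.
apiVersion: v1
kind: Pod
metadata:
  name: example-game-server   # placeholder name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: game-server
    image: example/game-server:latest   # placeholder image
```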
B: It says, you know, "we're going to wait a generous amount of time, and then we'll kill stuff," right? So, both in the draining and the graceful termination — like I said, I know it used to be an hour, it might be different now — but at some point there is a hard cutoff: we give up waiting, and it's time to kill stuff and replace the node.
B: Yeah — I can't remember if they've implemented surge upgrades. So, the equivalent of, like, when you update, you know, a Deployment or something, and you can say max surge — so you add new capacity before deleting, right? So you never drop below, like, your threshold of how many things you need to be serving actively at the same time.
B: I know that was on the roadmap, but I don't know if it's been implemented. But that would basically do what you want in terms of adding new nodes, then cordoning the old ones, and then deleting them. But after you cordon — like, the time between that and the deleting needs to be long enough for your allocated game servers to sort of finish and exit cleanly, and so that might depend partly on your game. Like, if you have really short game sessions — where they're, you know, five minutes — and you make sure you give yourself that whole hour, then there's plenty of time. But if you have, like, a longer-lived game session that runs for hours or even days, then, you know, no matter what we do in terms of cordoning, we're always going to kill that thing while it's still being used.
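The Deployment analogy above — add new capacity before deleting anything, so you never drop below your serving threshold — looks like this in a standard rolling-update strategy (a sketch; the names, image, and replica counts are illustrative):

```yaml
# Sketch of the surge behavior described above: bring up one new
# replica before taking any old one down, so serving capacity
# never dips below the desired count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # add new capacity first
      maxUnavailable: 0   # never go below the desired replicas
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: example/app:v2   # placeholder image
```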
E: Yeah, there's nothing that's super clear in the docs, as far as I can tell — in GKE, at least — about what "drain" means, and what that actually does, and how to — if there's even a way to — test that. Changing surge upgrades allows you to change the number of nodes it upgrades at one time.
B: Yeah — I don't know how much there is in the docs — I'm pretty sure that Eric Tune and Messim gave a presentation at KubeCon a couple years ago about node upgrades. So, again, that's where I think a lot of my knowledge comes from — it's sort of that time frame, and it may have changed since then. But I know that the node upgrades were designed to try to not kill workloads — but, again, they don't know anything about Agones game server states, allocated versus not allocated.
B: So we work hard to make sure, like, that we inform the autoscaler not to kill things, even if they don't seem to be doing anything at the moment. But I don't know if there's a similar indication you can give to the node upgrader. So, yeah — I agree with Mark, we should chat with the GKE team. I'm —
B: The autoscaler code is only looking at things inside of Kubernetes — in terms of, like, the workloads, in terms of scaling up and down, right? It obviously still interacts with the underlying platform in terms of creating virtual machines, but it gets all of its signals and cues from Kubernetes, right? And the node upgrade process can definitely vary from provider to provider.
B: If it's done well, it will also inspect things about pods running on nodes, and sort of try to do it safely, but there's no guarantee that that's the case. And also, even if it does that, it's not clear what signals we could give it to say, like, "hey, this one's really not ready yet — please wait longer."
E: Exciting to talk about. One other thought I had is whether it's worth the time doing, like, a look at some of the really old tickets, to see if there's anything that should be closed. I added a label a while ago for — we could — there's probably an agenda item for: do we want to put in place some kind of bot that does, like, a 90-day or some kind of —
E: Failing that, we could go back through and say, like, "okay — should we close this, or is there stuff that should get closed?"
B: April opened an issue in April of 2019 — so, two years ago now — that says we should add an About page that explains why we made Agones. Nice. I think we — yeah.
F: There we go. Mark, if time allows, can we discuss deprecating the metaPatch and replacing it with metadata?
F: Yeah, so — one of the concerns that I have: so, that was proposed by you, because it makes customers confused about why we use metaPatch in one API and metadata in the other API. But deprecation, and moving to a new field, is also kind of confusing, for the purposes of naming. So there is an engineer working on this, but I was wondering if it's really worth the time to do this deprecation.
F: There is no metric we have that says, okay, nobody is using this field anymore. Then we should either introduce a new version of the API in the future, for removing that field, or, like, kind of send a survey to see if anybody is using it.
F: And so this is basically the JSON naming of the GameServerAllocation — yeah, yeah, yeah. So, still, for the resource-based — the server extension API — it's still named metaPatch in the code; we just use a different JSON name.
E: Not an unreasonable question. I guess — we could change the code. I mean, look, we have precedent before of changing the Go API as needed, so we could. I just figured there are more people using the CRD interface as an actual CRD, but — that's sort of why I wrote in the ticket, like, I probably don't think we need to, but I can also see how it'd be confusing.
F: So I wanted to make sure — we're still not sure that we want to deprecate the metaPatch, because after that came up, I wasn't sure whether we want to reconsider deprecating or not in the gRPC API.
E: Okay, yeah — the CRD is fine, because it says metadata, but it's the Go code where I can see that.
B: Yes, I did close one thing. I assigned an issue to April about — yep — intake for security vulnerabilities, and realized that I had recently done that, and so I linked it...
E: ...to the PR, and it closed. Sweet. I marked this one as stale, to see if anyone disagrees, but — we were talking about maybe removing pod affinity from pods under packed scheduling. But since then, the Kubernetes scheduler has gotten a lot better about this stuff, so I don't think it's really a concern — so I think we can at least leave it on. So I've just marked these as stale, and we'll see if anyone, like, greatly disagrees.
E: What's this About-page shenanigans? Go on — "we should have a page that explains who founded Agones, who works on it, why we made it, et cetera." That's the one I was mentioning earlier. Oh — there we go.
E: So this was an issue where I was like: maybe it'd be really nice to just grab the entire node address array from the node that the game server is on, rather than, like, do the intelligent "hey — is it one of the public IPs? Is this the DNS address?" etc., etc. But it seems like people don't care too much. We've had some adjustments to that algorithm, about which ones we grab first, but people seem fine. So — I think it's been open for two...
C: Yeah, the conformance test will be harder there, I think, but I can have a look at what the conformance tests currently do.
B: So, I think what I was saying in my comment was: we could put, basically, a file that is a no-op there, because that would get rid of the warning that gets printed. Like, if you run some of the different make commands, a bunch of stuff spews by, and it's hard to tell if that stuff is okay or not — and this stuff, when it prints out, like, "error: can't find this file," you're like, "oh no — why is the file not there?"
B: Even though it's not there because there's no reason that it needs to be there, it still sort of prints an error, that we then swallow — because it's okay — and move along with the build.
E: Yeah, yeah.
E: That's probably a good one to keep. Did this get solved — "you know, I'm running on EKS"?
E: "Document how to use informers and listers" — that's legit.
B: I think that would be a nice thing. It would also help for — if you wanted to update the node pool the controller is running in, like we talked about updating the nodes for the game servers.
B: But you should be able to update the node pool the controller is running in, and the way to do that would be: if it's a size one, you do a max surge of size one — you create a new node, you add a new pod of this type, and then you can spin down the old one, and you have no downtime. Yeah, that would be really nice and fun to do. I think we should leave that open and give it more thumbs up. I'm gonna go file something.
C: Do I remember we had a discussion previously around pulling large containers? Did we ever — was that here? I think that was here. I'll quickly check the —
E: This is killing me, I swear — there was, like, a whole — we wrote a whole guide about, like, how, if you do an edit to a fleet, it has to, like, not have the fleet replica count in it; if you're doing updates via the recreate strategy, or two fleets, it gives your allocations a chance to cross-fade, right? Yeah — that got updated, and now I can't remember where that is.
E: What I think I'd like to do long-term is take this and write a whole section with, like, different scenarios for allocation, and fleet scale-up and -down — have all kinds of different scenarios, so that people can see different ways of doing things: everything from, like, canary testing to, like, "hey, I want to run a persistent world — how do I do that, you know, with this thing?" — all kinds of different scenarios we can just flesh out. So let's leave that open.
E: That's a — that's a good one, actually, though. On this one saying "best practices for game server allocation" — Pune, you wrote that. One thing we do have now, on the allocator service, is this section, "GameServerAllocation versus the allocator service." Is that enough? Or I could just write something at the top, in the ticket we have.
F: Well — as you said, the earlier document you opened — yeah...
F: ...had a couple of, like, different — like, game server self-allocation, the game...
F: ...allocation — and I think that section you added also could be a good match for this page as well, yeah — to have the full picture. Yeah.
F: That section you added — the comparison between the GameServerAllocation and the Agones allocator — I think that would also address my comment.
E: Pull Panda — I think we can close this now, because I'm pretty sure this is done. Yeah — no, we're not getting Pull Panda requests in here. Where — all...
E: You know, there's probably — this keeps coming up, where, like, every time we add a new developer, we need to add a new namespace for that developer — let me just update it. I wonder if there's just a little third-party project that someone could write that just does this somehow — like, any time you add a new namespace, it automatically creates the RBAC rules — and you can just have it as a small little third-party project.
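For context, the per-namespace step being discussed is roughly the service account and role binding each new game server namespace needs — a hand-written sketch, where the `agones-sdk` names are my assumption; check the Agones install docs for the exact resources:

```yaml
# Illustrative sketch of per-namespace RBAC for a new developer
# namespace. The service-account and role names are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: agones-sdk
  namespace: dev-alice   # the new developer namespace (placeholder)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: agones-sdk-access
  namespace: dev-alice
subjects:
- kind: ServiceAccount
  name: agones-sdk
  namespace: dev-alice
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: agones-sdk       # assumed ClusterRole name
```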
A: Okay — well, thanks for joining us, and — well, God, I can't believe it's already the end of April — so I guess we'll catch up with everyone next month.