From YouTube: Kubernetes Community Meeting 20180621
Description
This is our weekly community meeting, for more information check this page: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
B
Okay, go ahead. All right! Well, is there somebody available to record the meeting and able to host? Okay, wait. We got somebody? Clear? Perfect, okay. This is the Kubernetes community meeting on June 21st, 2018. It will be posted publicly on YouTube, so please be mindful that what you say is being recorded; everything is recorded. Please be mindful of that.
B
My name is Arun Gupta. I work for Amazon, in the open source team here, and I work very closely with the Amazon EKS team. I also work closely in SIG AWS; that's where I've been spending most of my time, so anything around Kubernetes on AWS, I'm happy to help you out. For everybody else: if you're not speaking, please be muted. It's a simple button on the bottom left corner of the Zoom console.
A
Howdy. Can you hear me okay? Yeah, okay. So we've gone from everything being green in the last week to everything being not so green right now. Let's go over what just happened. What happened this week was: we code thawed on Tuesday and unlocked the code freeze, so the pending submit queues from non-1.11 features should now be cleared at this point; if your stuff isn't merged, it's being held for some other reason. And we cut RC1 yesterday.
A
So please test. People can install that using kubeadm, if you want to go ahead and test it. Unfortunately, given the number of issues that have cropped up this week, release status is currently uncertain. I give it about a 50/50 chance that we're going to call for a delay; we'll make a final decision at the burndown meeting, 10:00 a.m. tomorrow morning. The main issues are CI signal issues.
A
However, any of those issues, and a couple of other things that have come up with, say, CoreDNS, could prove to be more pernicious than expected, and if so we will be scheduling the release delay. Again, look for an announcement from me tomorrow on a potential release delay. The other fun, happy thing is that the release notes collector, that is, the piece of code that goes through and harvests release notes from all of the PRs, broke about two weeks ago.
A
Something like two and a half weeks ago. It actually turns out it has always had a problem with missing some release notes, and for some reason in the 1.11 cycle that problem is worse. We still don't know what's causing it, so I'm asking anybody who did commit something in 1.11 that required a release note: please double-check that it is represented in the current draft release notes, and if you do find something that's missing, contact Nick Chase to have it added. Based on doing some sampling,
A
we estimate that there are probably still somewhere around two dozen release notes that ought to be in the release notes and are not there. And that is all the fun from 1.11 release land. So, more news tomorrow about whether or not we're actually releasing on June 26. Thank you.
B
Thank you, Josh.
C
Not much to say, other than I've just gotten, and am about to make, a PR on the last role for the release team there. And then we were having some discussions around the schedule specifics and the timeline, but of course that depends on 1.11; because if 1.11 slips, we'll probably be conservative and let it push into sort of mid-July, because the Fourth of July week is problematic. So a two-week slip there becomes something that has implications on the 1.12 schedule, but we'll cross that bridge over the next week or two.
B
Fix it, yeah. Okay, maybe we'll come back to Mark one more time. Let's see what's next in the agenda. So if I go here: KEP of the week. For those who don't know, there's kubernetes.io/docs/imported/community/keps; that's where the KEPs are documented. I just want to quickly read what the purpose of the KEP process is, and I think it very clearly summarizes it.
B
So it says: the purpose of the KEP process, which is basically the Kubernetes enhancement proposal process, is to reduce the amount of tribal knowledge in our community, by moving decisions from a smattering of mailing lists, video calls and hallway conversations into a well-tracked artifact. This process aims to enhance communication and discoverability, and that is all the more relevant given how widely distributed the Kubernetes community is. So that's essentially what a KEP is. Are you there to talk about the KEP of the week?
E
Can you see my screen? Yes, we can. Okay. So the namespace population proposal was originally part of the security profile proposal, and people thought this part is very generic and can be useful, so it became this standalone proposal for namespace population. Actually, there are a lot of details and complications behind it, so depending on the time limit I'm starting from the high level; if I have time, I can drill down a bit. So first, the problem that namespace population is going to solve.
E
It is this: we try to provide a mechanism where people, especially in a multi-tenant environment, can use the plain Kubernetes API and client tools to create a namespace, while the cluster admins are able to automatically populate the policy objects into the newly created namespace, without asking the creator to do additional work to create those policy objects. And these objects can be automatically enforced, changed and updated, even for namespaces already created: when the cluster admin changes the policy, it is automatically populated into all the existing namespaces.
E
So behind this idea there are a few challenges. The first thing is: when a user wants to create a namespace, we need to make sure that during the creation of the namespace, no other user, not even the creator, has the privilege to do something else, like creating pods, before all the policy objects are populated. So this is the first problem: security.
E
So here is how to make it secure. First, we only grant the create permission to the user. And once the namespace is being created, the mechanism introduces another CRD, called a namespace template, which defines a list of policy objects to be created in the namespace; the last object will be a RoleBinding, which grants the user additional permissions in the namespace.
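For reference, a minimal sketch of what such a template object could look like, written as a Python dict. The API group and field names here are illustrative guesses, not the KEP's schema; see the KEP PR for the actual spec.

```python
# Hypothetical "namespace template" custom object, sketched as a Python dict.
# The API group/version, kind, and spec fields below are illustrative only.
namespace_template = {
    "apiVersion": "example.k8s.io/v1alpha1",   # hypothetical group/version
    "kind": "NamespaceTemplate",
    "metadata": {"name": "default-policies"},
    "spec": {
        # Policy objects to stamp into every newly created namespace.
        "templates": [
            # e.g. a default-deny NetworkPolicy...
            {"apiVersion": "networking.k8s.io/v1",
             "kind": "NetworkPolicy",
             "metadata": {"name": "default-deny"},
             "spec": {"podSelector": {}, "policyTypes": ["Ingress"]}},
            # ...and, last, the RoleBinding granting the creator permissions.
            # "${creator}" stands for the value the controller substitutes in.
            {"apiVersion": "rbac.authorization.k8s.io/v1",
             "kind": "RoleBinding",
             "metadata": {"name": "creator-admin"},
             "subjects": [{"kind": "User", "name": "${creator}"}],
             "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                         "kind": "ClusterRole", "name": "admin"}},
        ],
    },
}
```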
E
So once that's done, the users are able to move forward to create additional objects, or create pods, in the namespace, while the other policy objects are already in place, like the network policy, or the role binding for the default service account to use a pod security policy. So everything is secure at that moment. This is the basic idea behind the mechanism. Under the hood, there are actually two components involved.
E
There's one big problem during that process: when the controller gets the notification about a newly created namespace, it has no idea who created the namespace, and that causes trouble when granting the final permissions to the creator. That part is basically solved by a mutating admission webhook: when a user creates a namespace, the webhook will annotate the namespace with the name of the creator, and then the controller will be able to substitute that into the resource template to create the role binding.
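The webhook half of that flow is small; here is a minimal sketch, assuming a Flask app served over TLS and registered in a MutatingWebhookConfiguration for CREATE operations on namespaces. The annotation key is made up for illustration; the KEP defines the real one.

```python
import base64
import json

from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative annotation key; the KEP defines the actual one.
CREATOR_ANNOTATION = "namespace.example.k8s.io/creator"

@app.route("/mutate", methods=["POST"])
def mutate():
    review = request.get_json()
    # The AdmissionReview request carries the authenticated creator.
    creator = review["request"]["userInfo"]["username"]
    # JSONPatch adding the creator annotation to the incoming namespace.
    patch = [{"op": "add",
              "path": "/metadata/annotations",
              "value": {CREATOR_ANNOTATION: creator}}]
    return jsonify({"response": {
        "uid": review["request"]["uid"],
        "allowed": True,
        "patchType": "JSONPatch",
        "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
    }})
```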
E
And the last step is about the initialization. To be compatible with existing clients, client programs and scripts, when a namespace is being created we don't want the create-namespace API call to return before the namespace is fully initialized. So we are utilizing the initializer mechanism: we put the controller's name into the metadata's initializers pending list, and the API server will basically hold the request for a while, until the controller finishes initializing the namespace.
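A sketch of the controller's side of that hand-off, using the kubernetes Python client of that era. Initializers were an alpha feature and have since been removed from Kubernetes, and the controller name here is hypothetical; this only mirrors the flow described in the talk.

```python
from kubernetes import client, config

INITIALIZER_NAME = "namespacetemplate.example.k8s.io"  # hypothetical name

config.load_kube_config()
v1 = client.CoreV1Api()

def finish_initialization(ns_name):
    ns = v1.read_namespace(ns_name)
    pending = ns.metadata.initializers.pending if ns.metadata.initializers else []
    # ... populate the policy objects from the template here ...
    # Removing ourselves from the pending list is what releases the
    # creator's blocked "create namespace" API call.
    remaining = [{"name": i.name} for i in pending if i.name != INITIALIZER_NAME]
    v1.patch_namespace(ns_name,
                       {"metadata": {"initializers": {"pending": remaining}}})
```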
E
There's also thought behind extending the initializers. Currently the mutating webhook only puts the namespace template controller in as an initializer, but this can be extended to support additional initializers to fully initialize the namespace. The mechanism is quite similar to the existing initializers in admission control, but that one is going to be obsoleted; so the proposal is basically learning from that mechanism, while proposing a more specific namespace initialization mechanism. And going back to the namespace template:
E
there are a few limits on the template, and we can see that in the future we could think about additional mechanisms, like extending the source of the template definition to other places; for example, the template could be a real namespace, with all the policies in it. That's outside the scope of this. So I think I have demonstrated all the ideas behind this proposal, and if you have any questions or comments, please feel free to go to the KEP PR and put comments there. So, the PR number...
B
I'm gonna share my desktop here. There is a KEP tracking board now; this isn't looking at the meeting topics here. So if you go here and you look at the KEP tracking, this is a GitHub project, essentially, part of kubernetes/community, and it's a nice list of items in case you want to look at the status of the different KEPs; you can easily track that. This is definitely worth a look.
F
Okay, take it away. All right, let's see if I can share my screen; this is the kiss of death for presenters today, so we'll see. Are you able to see that? Yes, I can. Excellent. Right, so I'm gonna give a very quick update about the status of SIG Big Data's work topics. I'm going to cover Spark, which has been the bulk of the work; Airflow, which is a workload scheduler and runner
F
that's very popular; and HDFS running on Kubernetes, which is needed to complete the story; and then a couple of final thoughts. So, main status: one of the big efforts was to get Apache Spark working natively on Kubernetes. This gives Apache Spark three different scheduler backends: it had Mesos originally, and YARN, and now it has Kubernetes as the third major backend scheduler. This was incubated for over a year in a side fork repo, and then in Spark 2.3
F
it was merged into the main Spark repo, and it is one of the marquee features of the Spark 2.3 release. So this was very exciting for everyone involved, and it was quite a multi-organization effort between Google, Pepperdata, my former company Palantir, Bloomberg, a bunch of different organizations; and several places are running this in production. Anirudh and I gave a talk on this at the Spark Summit a couple weeks ago, and it was a gigantic, packed-room talk.
F
There were well over 500 people in attendance, and they were kicking people out the door for fire marshal reasons. So this is a very popular new way to run Spark. A quick look at what's already released in Spark 2.3: kind of the basics, cluster mode, SQL, a bunch of modes of using Spark are already released; and in particular also the ways to build Spark containers, because this is the only mode of Spark in which it's very common, in fact required, that you use Spark packaged in a container.
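For reference, a Spark 2.3 submission against the Kubernetes backend looks roughly like the sketch below, wrapped in Python here only for consistency with the other examples; the API server URL and image name are placeholders for your own cluster and registry.

```python
import subprocess

# Sketch of a Spark 2.3 job submitted to the Kubernetes scheduler backend.
subprocess.run([
    "spark-submit",
    "--master", "k8s://https://<api-server-host>:6443",   # placeholder
    "--deploy-mode", "cluster",
    "--name", "spark-pi",
    "--class", "org.apache.spark.examples.SparkPi",
    "--conf", "spark.executor.instances=2",
    # Image built with the container tooling shipped in the Spark distribution.
    "--conf", "spark.kubernetes.container.image=<registry>/spark:2.3.0",
    "local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar",
], check=True)
```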
F
So there is new tooling added to start to do that. Spark 2.4, the stuff that's getting worked on right now: PySpark is a very frequently demanded feature, so that's getting worked on. Spark has a mode called dynamic allocation, which allows it to scale its executors up and down based on the workload. There are some features to ease the running of notebooks, although it's already possible to run Spark in notebooks very well. And we're gonna publish official container images of Spark; this is a new thing.
F
The Apache Software Foundation does not regularly publish official container images; it's up to the user to build their own. But we're actually gonna be publishing official images, and we're also adding R support into this, just like the Python support. Okay, I'm gonna talk a little bit, one slide, on Airflow. So Airflow is a workload scheduler, not a scheduler in the sense of the Kubernetes scheduler; it's a thing like: I want to run this repeated workload, which has these 17 steps and this parallelism, every day.
F
Previously, people would run executors on standalone hosts, and in this, using the Kubernetes executor, the executor runs in a pod. There's also the Kubernetes operator, which is kind of the converse of that: it means that you can have an Airflow step which says, I want to run this Kubernetes pod. So it's not the executor running in a pod; the actual action is: run that pod.
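As a concrete illustration of the operator side, a minimal Airflow 1.10-era DAG step that runs as its own pod might look like this; the image and names are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

dag = DAG("k8s_pod_example",
          start_date=datetime(2018, 6, 1),
          schedule_interval="@daily")

# Each run of this step creates a pod, waits for completion, collects logs.
step = KubernetesPodOperator(
    task_id="run-in-a-pod",
    name="run-in-a-pod",
    namespace="default",
    image="python:3.6",                      # placeholder image
    cmds=["python", "-c"],
    arguments=["print('hello from a pod')"],
    get_logs=True,
    dag=dag,
)
```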
F
The last project is HDFS. So for HDFS, there's a development repo with a Helm chart which will give you a fully installed HDFS setup on a Kubernetes system. I should mention that one of the purposes of having HDFS is so that you can store persistent, distributed data on Kubernetes; the Spark work of course works with this, or it would work off other sources of data, like cloud storage systems, external HDFS systems, or anything you've got.
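Installing that chart is the usual Helm 2 flow; a sketch, again wrapped in Python for consistency, with the chart location left as a placeholder for wherever the development repo keeps it.

```python
import subprocess

# Helm 2 style install of the development HDFS chart (path is a placeholder).
subprocess.run(["helm", "install", "--name", "hdfs", "<path-to-hdfs-chart>"],
               check=True)
```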
F
For the on-Kubernetes HDFS work: it does have the, very important in some networking setups, locality optimization, which fully works with the Spark code. There is namenode HA support, and I think that's automatically set up with the Helm charts; also Kerberos support, which is the very common security methodology used for HDFS. This is not yet in any kind of release candidate; it's hopefully gonna be an alpha release
F
maybe this month, maybe next. Last thoughts: one big need, which is getting worked on mainly in the core SIG Scheduling for Kubernetes, is to get parity with Spark on YARN. The main thing here, as I mentioned, is scheduling: YARN has all kinds of fancy features dealing with multi-tenancy and scheduling-related things. I talked to several organizations after the Spark Summit, who are like telcos and banks, who are serious users of big data, and they are interested in switching some or part,
F
you know, major parts, of their setup from using YARN to Kubernetes, which is a really big deal; but the scheduling sophistication is a key blocker for them, so it's good that this is getting worked on. The other thing that we really need is that official Airflow release with the Spark support; once we have that, it will make an extremely compelling demo for this whole technology, because Airflow is a very popular thing, and you can make real setups that use the combination of Airflow and Spark.
G
SIG PM? Sure, can you all see my screen?
D
Perfect. Hi everyone, I'm Mark Mandel; I'm tethering on my phone, so this should be hilarious. I'm a developer advocate for Google Cloud, and I'm gonna talk to you about Agones, which is a project for scaling multiplayer dedicated game servers on top of Kubernetes. So I want to talk about a very particular type of game: sort of your online, fast-paced games, your FPSs, your MMOs; so you're thinking Overwatch, or Unreal Tournament, those sorts of games. These sorts of games are very fast-paced, and latency
D
here is a really big deal: 10 milliseconds, 20 milliseconds either way is a big problem. So what I want to do here today is actually use a little open-source FPS shooter called Xonotic for the demo; you can go download it at xonotic.org. If you've ever played Unreal Tournament, that's sort of the game that's in here, and it has all the good stuff like deathmatch and capture the flag and all that kind of stuff.
D
But when we talk about these sorts of fast-paced multiplayer games, one of the most prevalent ways of hosting and running these is what's commonly referred to as a dedicated game server. So I just want to set some base knowledge, so we all understand the same things. A dedicated game server is actually a full, in-memory simulation of the game that runs somewhere in the cloud, and all of the players then connect to that dedicated game server.
D
Usually, then, they send in sort of things like: hey, I want to run forward, or fire my rockets, or throw this ball, and they send that input up to the dedicated game server; the game server then determines what is the actual true state of what's going on, and then basically sends down to the rest: hey, this is actually what happened inside this game.
D
There are two very good reasons for doing this. One of which, we were talking about latency: being able to control the geographic location of where these dedicated game servers sit is very important, because the closer it is physically, the less latency you will have. The other fun thing is that people are horrible, and if you run these dedicated game servers on your own network, they're much harder to hack, and it's much harder to cheat at these sorts of games. There are other ways of doing it, but dedicated game servers are definitely becoming the norm. So, in a traditional architecture:
D
once the matchmaker has a group of people, it's going to talk to some kind of game server manager, and that game server manager usually then talks to a cluster of machines and says: okay, cool, I need a dedicated game server process, that simulation server, running somewhere on this cluster of machines. Generally it'll grab an IP and port from the process that has started, and so it gets
D
a free IP, passes the IP and port for that dedicated game server back up to those players, and those players will make a direct connection to that process. Right: this is an in-memory simulation, so you want to have it all on the same machine, so it can process everything in memory for speed; and also load balancers add latency, so that's bad too. So we do direct connections, and that becomes very important here as well. So Agones is really this part here.
D
It's the hosting and scaling of dedicated game servers, and treating them in a way that makes sense for dedicated game servers: they do have in-memory state, but they come up in unordered ways, so StatefulSets and plain pods don't really work for these workloads.
D
So Agones is designed to be a batteries-included, open source, dedicated game server hosting and scaling project. It's currently in alpha; we are heading towards 0.3 right now. We've been working on this in conjunction with Ubisoft since about November last year; we co-founded it, I guess, in some ways, and they've been really, really great. They've got a huge history of running large-scale dedicated game server workloads and big multiplayer games.
D
So it's been really fun working with them. I do want to talk a little bit about why Kubernetes, and why Kubernetes is so important for this. I mean, Kubernetes is kind of amazing: we are essentially an operator, we have custom resource definitions and controllers, and we do extend Kubernetes, and that's been hugely, hugely beneficial in enabling us to move really quickly with this. It's been kind of amazing, the stuff we've been able to get done. But the other thing is the abstraction layer that Kubernetes gets you for games.
D
Latency is key, so being able to put servers in very particular places around the world, and having a lot of control and flexibility for that, is paramount for these types of games. So maybe you want to run on the cloud, or you want to run your own dedicated game servers, or maybe you just want to put it in some random place in the middle of Europe somewhere, or somewhere that just isn't covered by the regular providers.
D
The abstraction Kubernetes gives you is great, and game teams are also loving the fact that they can run their other processes on here too; it's just one platform they need to manage. So that's pretty awesome. Sweet. So, as I said, Agones is an extension: we have CRDs, possibly unsurprisingly. We have a GameServer; this is our standard unit inside Agones, so you're able to define a game server inside Kubernetes as the CRD. We can give it a name, as we would do normally.
D
We have a port policy: as we mentioned before, we're doing direct connections, so Agones will actually manage which ports on those nodes that you have set up, within a range, make sure they're available, and basically then allocate ports. We're actually doing host ports behind the scenes, for interest's sake; that's how we do it. And then the game server itself: it's just a container, nothing more special than that.
D
We do have an SDK that we build in, but at the end of the day it's really just a container, and we actually allow a full pod spec within our GameServer, so anything you can do in a pod, you can do here as well. Much like pods and, say, deployments, in standard Kubernetes parlance: the GameServer is kind of like our pod; it's our base building block inside Agones.
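A hedged sketch of creating such a GameServer with the kubernetes Python client: the stable.agones.dev/v1alpha1 group and the spec fields follow the alpha-era Agones examples as best as can be reconstructed here, so check the Agones repo for the exact schema of your release.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

game_server = {
    "apiVersion": "stable.agones.dev/v1alpha1",
    "kind": "GameServer",
    "metadata": {"name": "xonotic-1"},
    "spec": {
        "portPolicy": "dynamic",        # Agones picks a free host port in range
        "containerPort": 26000,         # the port the game binary listens on
        "template": {                   # a full pod spec, as noted above
            "spec": {"containers": [{
                "name": "xonotic",
                "image": "<registry>/xonotic:latest",   # placeholder image
            }]},
        },
    },
}

api.create_namespaced_custom_object(
    group="stable.agones.dev", version="v1alpha1",
    namespace="default", plural="gameservers", body=game_server)
```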
D
So, whereas you would use a deployment inside Kubernetes, here we actually use what we call a fleet, which is essentially a large group of warm game servers that are basically sitting there, waiting to be played on, for players to connect to and play. A fleet, much like a deployment, has a game server template, but with replicas that we can then scale up and down.
D
So you can see that here: replicas: 2, and we have a game server set just down here, a game server template here, I should say, and that works pretty much like a deployment. But you may have realized: right, we don't do load balancers, so we need a way to be able to say, hey, give me a game server out of that pool, of that fleet, and make sure that we know that players are playing on it. So, unsurprisingly, we have another CRD: we have what's called a fleet allocation. In actuality,
D
you would probably do this through the Kubernetes API, but you can create a fleet allocation against a fleet, and it will give you back a game server straight out of that fleet, which is then moved to an allocated state. We'll have a look at that; that's actually really, really important. For time reasons: beautiful, go to the demo slide, beautiful, come on. I know... all right, fine, I'll do it here instead.
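Since the demo would not cooperate, here is a hedged sketch of the allocation flow just described: create a FleetAllocation naming the Fleet, and the object's status reports back the allocated GameServer, including its address and ports. Group, version, and field names again follow the alpha-era examples; verify against your Agones release.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

allocation = {
    "apiVersion": "stable.agones.dev/v1alpha1",
    "kind": "FleetAllocation",
    "metadata": {"generateName": "xonotic-alloc-"},
    "spec": {"fleetName": "xonotic"},   # the Fleet to allocate from
}

result = api.create_namespaced_custom_object(
    group="stable.agones.dev", version="v1alpha1",
    namespace="default", plural="fleetallocations", body=allocation)

# The status carries the allocated GameServer, now protected from scale-down.
print(result.get("status"))
```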
D
My demo just decided it wasn't gonna play nice. Wow, all right. Well then, I'll talk through this real quickly; I can't even exit it, so that's fun. That's unfortunate, but okay. So what happens when you allocate a game server out of a fleet is that it moves to an allocated state. This is important because if you have players that are playing a game, you cannot shut down that game; that's really bad. They get mad if you prematurely shut down their things. So we need a state that basically says: hey, this is important.
D
So if you move a game server to allocated, and then you change the size of that fleet, or you push out a new version, doing a rolling update inside fleets, then it'll actually stay untouched. It'll wait for that game server to shut down, which is an application-specific thing, and it'll shut it down only, and only when, that game has actually finished, which is really important. I don't even know what I'm sharing at this point; all right, cool.
D
What else was I gonna say? So that's kind of it. I can't show the demo, unfortunately; I've got a GKE cluster set up, it's running, it's all perfect; we were gonna set up Xonotic and connect to it, and you'd see a game on there, which would be great. If you want to learn more about Agones and potentially see a demo, I've got a bunch of YouTube videos up on my YouTube account. But we're actively looking for contributors.
D
So, anyone who has experience with game servers or scaling, and it's totally cool if you don't; especially if anyone has experience with end-to-end testing: we have a full test suite using fake packages, but we definitely need more end-to-end testing. And we're looking to expand out into doing stuff with statistics collection and display, probably around OpenCensus and Prometheus, having some dashboards, that kind of stuff. So if anyone's interested in that side of things, we're also more than willing to have people come in. Yeah, we're actively building stuff, and we're really excited about the project.
G
So, who are we? Aparna from Google, Ihor from CNCF, and Caleb Miles from Google as well are the chairs of SIG PM. We also break SIG PM into a set of sub-projects: on the program management and release side, we've got Jaice from Google; on the product management side, we have myself, from Red Hat, and Dustin Kirkland from Google; and the marketing arm is Kaitlyn Barnard from CNCF and Natasha Woods from CNCF.
G
So what have we done, what have we been doing recently? We recently put up a new charter in the kubernetes/community repo, which you guys can check out. We've been redefining what the sub-project roles are and who the sub-project owners are; that's still an evolving process.
G
The biggest part is that we have actively started meeting again, bi-weekly. So if you guys are interested in meeting with us, it's all in the community calendar. So, how can you help? We are actively looking for new members in SIG PM. Member representation can be from any company; any person is welcome to join SIG PM. We're actively looking for new members to generally expand that representation across companies, especially people from companies who are both frequent Kubernetes contributors and Kubernetes vendors.
G
So, how can you help, continued: we are soliciting SIG roadmaps. This is gonna be big in the next year or so; we want to see SIG roadmaps for Kubernetes 1.12. I will be sending out an email after the community meeting around that. So, if you're a SIG chair or a technical lead: please start planning with your SIG, communicating this to SIG PM in advance of us bugging you, and filing and updating issues on the features repo, as well as making sure those issues have the appropriate feature owners.
G
The feature owner within the features repo is essentially the project manager for that feature, right? So, barring availability of that project manager, we will default to reaching out to the SIG chair for updates on the feature; just be aware of that ahead of time. Thanks. There is a slide deck that you guys can see in the community notes about where exactly to reach us; we are on the Kubernetes Slack, in the SIG PM channel.
G
Go ahead and hit the next slide. So, SIG Azure: what do we do? SIG Azure is everything around building, deploying, maintaining, and supporting Kubernetes on Azure, whether you're Microsoft, Red Hat, whoever you are. We want to make it very clear that it's independent of company affiliation and of any specific implementation.
G
So, what's new: within the last month or so, we've put up a SIG charter within the community repo. We announced the new leadership team: I'm a new chair, we have new co-chairs as well, and Cole has been the technical lead for a little bit. I believe this was announced at KubeCon, but I'm just re-announcing it at this point. We're also beginning the discussions around a SIG Cloud Provider.
G
So what's cool, what's really cool: OpenShift on Azure is coming; that is going to be a co-managed service between Microsoft and Red Hat. That's coming soon; as soon as we have more details, I'll be giving them to the community. AKS, the Azure Kubernetes Service, was announced GA within the last, honestly, week and a half. And ACS Engine now supports Cilium and Flannel among the container network interfaces, as well as support for containerd and Clear Containers.
G
Again, exceptions can be granted on a case-by-case basis, but part of the reason for this is that we don't actively maintain beta versions of the Azure SDK, so any work done within those carries the risk of not using a stable version. So we're making a hard rule now that we're only going to be using stable versions of the Azure SDK. VM scale sets, which were introduced in 1.10: we're going GA with those; there's been a lot of work around stabilizing scale sets and their support.
G
We recently announced, within, I want to say, three or four months, maybe more at this point, the load balancer and public IP standard SKUs; so we're now able to leverage those standard SKUs within Kubernetes. And integration with KMS and Azure Key Vault.
G
So what are we planning for 1.12? This is a general list, subject to change and all that. So: work around availability zones, the idea of having availability zones and fault domains built into the native Azure support in Kubernetes, so things around network and volume support; and explicit managed service identity, which is, think IAM on AWS, being able to provide that as a service for Kubernetes.
G
So having each pod, having each node, excuse me, have its own identity within a cluster. We're also gonna be actively working, with both SIG Windows as well as SIG Node, on Windows CRI configuration. The Azure app gateway ingress may be pushed out to 1.13; we'll see. More work around the cloud controller manager and external cloud providers, so that work I was mentioning with Andrew will continue to happen. And improvements around CSI: CSI is active, and we've got a few things prepped for you guys.
G
So we've got a few things in the works for CSI, not just for Azure but at the community scale, across all providers. You'll see more work around Azure Disks and Azure File storage coming in for 1.12. And the autoscaler work: everything within the kubernetes/autoscaler repo will be bumped to support AKS, as well as in general bumped to beta. Keep going.
B
Alright, thank you so much. We're gonna go back to the regular agenda, now that we have covered all the items in the agenda. So let's get to the announcements section. The first announcement is: please pin your SIG meeting info and agenda doc in your SIG Slack channel. Now that the main calendar is not on kubernetes.io/community, meeting info is less discoverable without these links. The SIG leads and the chairs would have received an email this morning; this is an announcement by Paris.
B
Actually, it was sent to the SIG leads list. New Zoom settings and moderation controls have been discussed over there, so let's keep our meetings safe and transparent; it's upon each lead and chair to ensure that. The next announcement concerns all SIGs, and I think this was added by Gwen. Gwen, do you want to talk about it?
H
Yeah, sure, happy to. So, Carolyn and Nikita did some really great work in figuring out guidelines for the help wanted and also the good first issue labels, and it was decided in SIG Contributor Experience that any issue across the repos that is labeled with good first issue should also come with instructions on where to get started.
B
Well, thank you. If you want to reach out to her, she's always on the Slack channel, in #sig-contribex. Thanks, Gwen. So I'm going to move on to the last section for the day; basically, it's about shoutouts. If you want to post a shoutout, if you appreciate somebody's work, there is a #shoutouts channel on our Slack, so post it over there; we capture those shoutouts and list them. I checked them before the meeting, and this is the current list. The first one comes from Jason DeTiberus.
B
It goes to neolit, which is Lubomir Ivanov, for all of the docs contributions for kubeadm 1.11. The second one also comes from Jason; this time it is for Jennifer Rondeau, for the relentless work on improving our docs and helping bring some more structure to the docs process for SIG Cluster Lifecycle.
B
Now, the third one is from Lubomir Ivanov, and he's giving it back to Jason, and to Liz Frost, Chuck, Timothy St. Clair, and Lucas Käldström, for the relentless grind through the kubeadm 1.11 backlog, potentially making it the best release thus far. The next shoutout is from Austin Adams to Łukasz Gryglicki, hopefully I didn't butcher that name too badly, for devstats, which is awesome. And if you don't know, devstats, at devstats.k8s.io, is where you can find those dev stats.
B
Unfortunately, I can't capture emojis in a meeting, but you get the sense. The next one comes from Erick Fejta, to Sen Lu and Benjamin Elder, for being ever diligent about reviewing PRs in a timely manner. The next one comes from Josh Berkus, to Konstantinos, for actually beta testing 1.11 and spotting a bug before RC1.
B
The next one comes from Kirsten, as a shoutout to Gwen for always generously helping new folks get started contributing to Kubernetes, and also for completing her first major technical PR; congrats, Gwen. And the last is an acknowledgement by Gwen, basically saying: oh no, really, I couldn't have done it without so much help from Christoph Blecker, Cole Wagner, Erick Fejta, and Benjamin Elder. Everyone was super nice and patient and helped me learn, so shoutouts to them.
B
I'm so grateful. And the last shoutout, which I did not put in the shoutouts channel, is my personal one: it goes to Jorge and Paris, who have been doing a fantastic job running this community meeting and helping me be the host of it. So I really give a shoutout to them, for helping me prepare, taking time out, and working with me. So hopefully the meeting went well and everybody had a good time.