From YouTube: Kubernetes Community Meeting 20180816
Description
This is our weekly community meeting, for more information check this page: https://contributor.kubernetes.io/events/community-meeting/
A
Hi everybody! I am Aaron Crickenberger, also known as Aaron of the SIG beard, also known as Aaron the member of the Kubernetes steering committee, also known as Aaron the co-founder of SIG Testing, also known as Aaron the guy who won't shut up about GitHub labels on the kubernetes-dev mailing list, and I am this week's host for the Kubernetes community meeting. Today is Thursday, August 16th. This meeting is being publicly recorded and will be posted to YouTube.
A
So please be mindful that what you say here is on the permanent record, and will be so forever and ever, till a black hole opens up and swallows us into the void. On today's agenda we have a demo coming up on the Kubernetes ingress controller for Kong, and then we're going to have the usual release updates and a couple of SIG updates today, from SIG Docs, SIG IBM Cloud, and SIG Autoscaling. So with that, I will hand it over to Harry for the demo now. Hello.
B
Everyone, this is Harry. I'm just going to go ahead and share my screen; I hope you guys can see my screen. So, I do cloud infrastructure and software at Kong. As you all might be aware, Kong is a very widely adopted open source API gateway out there. We are built on top of nginx, so we inherit nginx's performance and design decisions.
B
We basically extend nginx for more flexible routing and dynamic configuration, and we also have a plugin architecture, so you can execute custom logic in your request processing at the ingress layer or at the API gateway, and run code that is common to your microservices inside Kong. So I will just jump directly into the demo, start deploying, and then explain the architecture of how we have designed our ingress. We'll go ahead and execute our deployment, which creates a bunch of custom resources and RBAC-related
B
rules and a service account, and also creates a database which is used by Kong. So here's how we do our deployment: we have Kong in data plane mode and control plane mode. The data plane nodes proxy all the traffic for your services, and the control plane Kong configures those data plane nodes via a database. The ingress controller: we have designed it so that
B
all the things related to Kong are deployed in a specific kong namespace, which can proxy traffic for all your namespaces. Also, as you can see here, we proxy traffic directly to pods, bypassing kube-proxy, which helps us in having features like sticky sessions and load balancing policies in Kong. So that's the basic ingress model we have, and we listen for ingress resources. Since ingress resources are fairly limited and narrow,
B
we also have a custom resource called KongIngress. I'll just go ahead and show the KongIngress here, where we have three things: proxy, route, and upstream. With proxy, you can specify whether you want to proxy HTTP or HTTPS traffic onward to a particular path, and you can specify connection details. Routing-wise, you can specify the priority of regexes, whether you want to accept traffic only for HTTPS and not HTTP,
B
whether you want to accept traffic based on request methods, and things like that. We also have the option to customize health checks, both active and passive. So that's the gist.
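For readers following along, a KongIngress object capturing the proxy/route/upstream split described here might look roughly like the sketch below. It is based on the kong ingress controller's CRDs; exact field names vary across controller versions, so treat it as illustrative:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-kongingress   # hypothetical name
proxy:
  # how Kong forwards to the upstream service
  protocol: https
  path: /api
  connect_timeout: 10000
route:
  # routing rules: protocols, methods, regex priority
  protocols:
  - https            # accept HTTPS only, not plain HTTP
  methods:
  - GET
  - POST
  regex_priority: 0
upstream:
  # load balancing and health checking
  hash_on: ip        # consistent hashing keyed on client IP
  healthchecks:
    active:          # Kong probes the targets itself
      healthy:
        interval: 5
        successes: 3
    passive:         # Kong watches live traffic for failures
      unhealthy:
        http_failures: 3
```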
I'll go ahead and check if the pods are up. Okay, so it's still being initialized; this usually takes a couple of minutes, since it provisions a Postgres database and our ingress controller. So meanwhile I will go ahead and deploy a dummy application, which is basically just an echo server, and then try to get the URL of the server.
B
This is exposed as a NodePort. So while that's being done: beyond the ingress resource we create, we also have something called plugins. Plugins in Kong basically allow you to execute a bunch of custom code. So here's an example of a plugin resource that I have, which basically is a rate-limit configuration, where you can limit traffic upstream of your proxy. Here we are saying that we want to limit traffic to only three requests per minute and a hundred requests per hour, and we'll limit by the client's IP address.
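The rate-limit plugin resource being described might look roughly like this. The config keys (minute, hour, limit_by) are Kong's rate-limiting plugin configuration; the surrounding CRD shape has changed across controller releases, so this is a sketch rather than the exact manifest from the demo:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-ip        # hypothetical name, referenced from the ingress later
plugin: rate-limiting   # which Kong plugin to run
config:
  minute: 3             # at most three requests per minute...
  hour: 100             # ...and a hundred per hour
  limit_by: ip          # counted per client IP address
```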
B
So we can create these custom objects in the Kubernetes object store and then apply them to our ingress resources via annotations, and Kong will take care of the rest. You don't need to know the internals of Kong; you just need to know what open source plugins we have, and we have over 60 plugins which are open source. So our service is up and running; I'll send a request directly to the service.
B
Alright, it seems like our pod is up and running. I'll now go ahead and create a plain ingress object. So, as you can see, we are proxying for host foo.bar, and since it's at the root path, everything goes upstream. So I'll go ahead and create our ingress rule. Now I'm going to source a file which is basically just going to set some environment variables for our proxy IP and proxy port. So, as you can see, we have kong-proxy, which is the data plane.
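The plain ingress object being created is a standard Kubernetes Ingress. Here's a sketch of what the demo's rule could look like, using the extensions/v1beta1 API group current at the time; the hostname and service name are illustrative, not taken from the recording:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: foo.bar            # route on the Host header
    http:
      paths:
      - path: /              # root path, so everything goes upstream
        backend:
          serviceName: echo  # the dummy echo server's Service
          servicePort: 80
```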
B
Let's see... all right, so Kong is up and running. Let's see; I think I did not source the hosts file, so let me source the file. All right, so as we can see here, we got the proxy to send the request upstream, and it was proxied by Kong; we can see that via a header, and Kong injects some latency headers here too. So, to get a little bit into the insides of Kong, what Kong is doing internally: this is the admin API of Kong, and I'll get the targets.
B
We can do hash-based or cookie-based routing; we can configure that. Okay, so if I now send the request, this was handled by the pod with IP ending in .8, and if we do it again, it's still handled by .8. So we have static, consistent hashing: it will not round-robin; it keeps a full state machine inside and will route the traffic accordingly. So this is how a plain, simple ingress works.
B
We support multiple services, we support TLS upstream and TLS termination and certificates. Next I am going to quickly show how a plugin would work. So KongPlugin is a custom resource; think of it more like a config map than a workload resource. Here we go ahead and create our plugin resource, and then I'll go ahead and patch our ingress object to tell our Kong ingress to apply the rate-limiting plugin and use this configuration.
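Attaching the plugin to the ingress is done with an annotation that references the KongPlugin by name. A sketch; the annotation key has been renamed across controller versions, so the key shown here is illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    # reference the KongPlugin resource created above by name
    plugins.konghq.com: rl-by-ip
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: echo
          servicePort: 80
```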
B
So the ingress is patched, and this rate-limit plugin should basically limit requests to three: it should not allow you to send more than three requests in a minute, and it should inject the headers that are necessary to convey that. So here is a request, and as you can see, Kong has injected four headers which tell you the limit and the number of requests remaining.
B
As you can see, we can make two more requests, so we will make one more, and this should really be our last request; and as we can see, the rate limit is exceeded. So this is how you can configure authorization, and you can transform requests at Kong: you can do transformations, you can do authentication, you can do OIDC, and the plugin SDK is open source. And yeah, that's all I had to demo today.
C
Another week flies by; we're 40 days to the release target. A couple of happenings this week: we've created the release-1.12 branch, and we're doing regular fast-forwards now, pulling master into the branch, for the time being, for roughly the next two and a half weeks, and then obviously more targeted once code freeze happens. Branch CI is on track to arrive this week for that release-1.12 branch, and our CI signal is mostly okay.
A
So in the meeting notes you can see the latest patch releases that were cut and when they were cut. I believe we'll have some upcoming patch releases in the next week or two. Moving on, let's see, graph of... oh, that's me: graph of the week. Okay, I'm gonna see if I can actually keep myself to a few minutes here.
A
So I'm gonna share my screen... all right. So let's talk about flaky and failing tests. I don't know about y'all, but my day-to-day kind of involves sweeping through about fifty to sixty issues and pull requests a day at a minimum, so I kind of, personally, as a human being, notice when things feel a little bit off in terms of the flow of the spice... I mean, the commits into Kubernetes.
A
The quickest signal, aside from being a human being, is to go take a look at this Testgrid dashboard called presubmits-kubernetes-blocking. We also have a non-blocking presubmits dashboard; those are tests that don't necessarily prevent PRs from merging, but this dashboard shows everything that does prevent PRs from merging. And actually, right now it's doing a great job of demoing how Testgrid isn't always perfect: I can safely ignore this red-looking thing, because there's no test failure in here named "Overall", which would mean the entire job is failing.
A
You can come to this dashboard to see whether that's true or not, and this is Testgrid's sort of weird quirk: it kind of considers a test failing unless it actually sees a pass for it within some window, and we haven't quite gone past that window here. But aside from that red thing, everything kind of looks good. I can also see these percentages of runs that failed over the past week. But that's just a number; what if I wanted some more graphical way of seeing how things were failing over the week?
A
So, I've shown this dashboard a couple of times before: this is Velodrome. It's basically just a Grafana instance that pulls some stuff out of an InfluxDB instance. We have a bunch of panels here; the panel I really want to focus on today is called the presubmit failure rate graph. So if I click on this and do "view" to expand it up here: some of you may have been feeling like kops was failing a bunch recently.
A
It's the blue line here that I'm hovering over, and it was failing pretty continuously. But something that's been pretty insidious and creeping up over time is this purple line, which corresponds to the integration tests that run on your pull requests; you can see that's been kind of going up over time. I want to give a big shout-out to Janet Kuo for actually putting in the PRs to fix this, and you can see the flakiness is going back down.
A
I know this looks super noisy, but it does kind of help us identify which job might be the culprit when the Testgrid summary dashboard doesn't do a great job. So, who's actually responsible for fixing these things, and do we actually know when these things are happening? Your friendly CI signal person kind of helps with this, but you all can help with this too: if you notice that a test is failing, you can file a bug. Here we're looking at all the bugs that have the label kind/flake.
A
That's the kind/failing-test thing; you can see a lot of people also like to tag these as bugs, or critical-urgent, or a bunch of random things. We're still kind of working on our label hygiene, but kind/failing-test means failing all the time. And finally, who actually is responsible for fixing these things? You can take a look at who owns the test case in question: oftentimes in Testgrid, or in the test name itself, you'll see this little tag of [sig-foo], so you should probably go ask sig-foo, "hey,
A
what's going on with your test?" If it seems like it's a broader problem, you should probably also go take a look at who owns the job, and I don't really have as great an answer for this. We're moving to a system where all jobs are defined in directories that correspond to the SIG that owns the job; I've dropped the directory in the meeting notes there, but generally, if the job has "network" in the name, it's probably SIG Network; if the job has "node" in the name, it's probably SIG Node.
D
I'm here, thanks. Hi there, so I'll be presenting this KEP on behalf of Nishi Davidson and myself. My name is Chris Hoge, from SIG OpenStack, and she is from SIG AWS, and we both have been participating in the new SIG Cloud Provider, which is a SIG that is meant to kind of coordinate all of the efforts across all the cloud providers and help focus on common goals and outcomes. And one of the first goals that we're working towards...
D
Now, in addition to this, all the cloud providers are being moved out into external cloud providers that are maintained by sub-working-groups within SIG Cloud Provider, and so we're also looking, as a second goal, at producing a set of documentation which accomplishes the same thing for the out-of-tree providers. Now, this adds a little bit of complication, because there are two levels to this. The first is: how do you activate an external cloud provider?
D
Do
it
in
a
consistent
way
with
with
external
tooling
such
as
cubed
min,
that
makes
that
successful
and
positive
and
so
sig
cloud
provider
is,
is
going
to
attempt
to
produce
documentation
and
improve
the
documentation
for
how
you
load
the
external
providers
and
then
also
and
then,
and
then
also
set
standards
for
minimal
documentation
for
all
the
external
cloud
providers.
So,
at
a
high
level,
that's
what
this
cap
is
meant
to
accomplish
and
I'm
happy
to
take
any
questions
and
see
if
I,
if
I
missed
anything,
please
let
me
know.
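For context, activating an out-of-tree provider generally means starting the core components with --cloud-provider=external and deploying the provider's own cloud-controller-manager separately. A hedged kubeadm-style sketch against the config schema of this era; field names shifted between kubeadm config versions, so treat it as illustrative:

```yaml
# Run the core components in external cloud provider mode; the
# provider's cloud-controller-manager is then deployed separately,
# for example as a DaemonSet.
apiVersion: kubeadm.k8s.io/v1alpha2   # kubeadm config version circa 1.11
kind: MasterConfiguration
apiServerExtraArgs:
  cloud-provider: external
controllerManagerExtraArgs:
  cloud-provider: external
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
```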
D
We've had a few meetings with the cluster lifecycle team, and that's something that is definitely on our radar, and where there were opportunities to collaborate, we've asked them to work with SIG Cloud Provider on that. Those communications haven't been particularly strong in the last few weeks, but we've definitely had conversations with them, and we're aware that the efforts are being shared by both of these SIGs.
F
Chris,
if
I
they
add
a
lot
of
this
work
at
the
cap,
was
done
primarily
because
Joe
Beda
and
tipsy
Clair
from
C
cluster
lifecycle
we're
pushing
for
it.
So
I
do
want
to
give
that
the
credit
to
like
push
push
us
to
actually
create
this
gap
and
Joey.
Yes,
we're
working
towards
improving
all
the
documentation.
Well,.
E
No
I
mean
yeah,
and
this
is
great
work
and
thanks
for
doing
this
and
in
terms
of
it
I
think
you
know
it
starts
with
documentation.
Then
we
can
take
a
look
at
it
and
say:
how
can
we
simplify
that
by
improving
q,
Badman
and
building
code
so
that
you
know
the
best
documentation
is
the
stuff
that
is
so
obvious.
You
don't
need
it,
but
we're
clearly
not.
There
agreed.
A
All
right
thanks
so
much
Chris
and
Nishi
and
yeah
hugely
supportive
of
the
cloud
providers,
sakes
efforts
to
sort
of
simplify
some
of
our
six
brawl
that
we
have
going
on
and
trying
to
figure
out
the
appropriate
use
of
commonality
across
cloud
provider.
Implementations
and
Doc's
are
hard
they're,
the
funnest
part
of
it
all
hey
speaking
about
that.
We
have
six
dates
and
our
next
sake
to
give
an
update,
sig,
Doc's
nice.
G
Cool, okay, so what's going on: so yeah, 1.12 I guess is underway, and Zach Arnold is the docs lead. Misty has refactored the contributor guide, which is great; this is for the docs contributors, to help guide people that want to contribute to the doc set. Let's see, one of the PRs that's kind of under consideration right now is adding alternative search engines for the folks in China, so that might be Bing or Baidu or something like that.
G
So
we're
trying
to
figure
out
like
what
that
should
look
like,
let's
see,
and
then
we
need
sort
of
a
working
group.
That's
focused
on
the
generated
docs.
For
example,
there's
like
a
a
cubelet
documentation
issue
where
we're
not
getting
all
the
information
from
the
that's
embedded
in
the
source
code
and
like
either.
G
...something we can all agree on, and then we can generate some diagrams and stuff to help users really understand what's going on in the system, because one piece of feedback from the UX studies that we did recently was that there weren't enough diagrams, and the ones that we had weren't that helpful. So we want to try to address that. That was initially, I think, tentatively accepted, but they're working out sort of the contract with the contractor.
G
Then, starting this weekend, is Write the Docs in Cincinnati, so SIG Docs will be doing a PR bash and a docs sprint there, and that will be on Monday. And then the main thing which I wanted to go over today was the search outage. Probably most of you know: last week or so, you would notice a lot of the kubernetes.io search results were dropping off.
G
So when you did a search for Kubernetes stuff, you wouldn't see any results that were links to kubernetes.io. So we did a post-mortem for that, and I'm not going to go into all the details, but we did finish it. The long and short of it is that when we started offering the versioned docs, we had to make sure that those weren't indexed in the search results.
G
So we would add the X-Robots-Tag noindex header for those, but then, through a series of separate PRs, that mechanism got stepped on, and accidentally the noindex header ended up in the build for the production site, on production master. So that's what caused it to drop off. So we did this post-mortem, which explains what the history is and what the mechanism is, and then we have takeaways for how to fix it.
G
So, primarily, what we're going to do is, first, make sure we hand off all the infrastructure and tooling; Luke Perkins will then own that, and I will help him document all of those mechanisms, and they will be in the documentation repo if people need to see them. That way the knowledge is not compartmentalized in particular individuals. And then the other things that we're planning to do are to add some testing and monitoring, just so that, you know...
G
First, we can make sure, when the production site is built, that these things don't happen; but then also we should monitor the indexing, the Google indexing, so if something happens (it may not even be this sort of thing specifically), if it goes up or down, we'll be notified. And then the other thing was giving it a better failsafe default state, because the problem was: since master was the only branch that didn't need to have the noindex on it, that was set up as sort of an exception.
G
But then, when this got stepped on, the default state was that nothing got indexed, and that's probably a more terrible state than if everything was indexed, so we're gonna redo it so that the default state is better. And I will be taking this postmortem and putting it back into the GitHub issue, so that it's more visible and people can read it, and I will put a link to that in the notes after that's done. And I think that's it; did anyone have any questions?
B
All right, so I have a slide deck link in the agenda. I'm gonna share my screen here... screen... oh, it's okay, the link instead? Yep, yep, thanks. All right, so hello, everyone. My name is Sahdev Zala, a SIG lead for SIG IBM Cloud, and in line with how Aaron introduced himself, I'm also known as a member of Aaron's fan club, as some people already know.
B
We have IBM Cloud Kubernetes Service, IKS, and, on the private side, IBM Cloud Private, ICP. They both participated in the CNCF conformance program, and they both are Certified Kubernetes. The SIG meets every other week, every other Wednesday, so it meets consistently; we had a meeting yesterday, with about seven to ten regular attendees in the SIG, and all the meetings are recorded and available on our YouTube channel. About SIG leads: we have three SIG leads, Richard from the IKS team...
B
You can read more about the SIG on the README page; I put the link here, and if you are interested, please do join the SIG mailing list to keep up with what's happening in the SIG. There's a link there too. Before I move to the next slide: you know, as I mentioned, this is the first time we are providing updates to the community meeting as a SIG, so I want to take this as an opportunity to thank some of the, you know, awesome people.
B
Everyone in the community helped with the processes as we were creating this SIG; specifically, Paris and Jorge were a lot of help, whether it was the YouTube settings or the Zoom accounts or anything. These folks were always available to help, so thank you so much.
B
Moving to the next slide: what are the discussions we have been having so far in the SIG meetings? First, I would like to mention that we had a couple of chats with Andrew Kim, a SIG lead for SIG Cloud Provider, and we will be working on moving over a sub-project. On a related note, we don't have a public repo yet for the IBM cloud provider code; it's a work in progress, another SIG lead, Richard, is working on it, and he's actually on the next slide.
B
One of the things I want to mention here: in every SIG meeting's first 10, 15, 20 minutes, we have SMEs from IKS and ICP talking about some of the new features as they are made available in both of those platforms, and, you know, anything related to Kubernetes that is beneficial to the SIG discussions.
B
Some of the other interesting topics we had included the Kubernetes update and support strategy in IBM Cloud Kubernetes Service; we had about 20 minutes of talk from Richard, but in general, you know, it basically supports three concurrent releases at any time. Right now it's four, actually: just the day before yesterday there was an announcement made for
B
1.11.2 support in IKS, so it's still doing 1.8.2 all the way to 1.11. Cluster multi-availability-zone support was added just some time back, and we had good talks and discussions on that. On the IBM Cloud Private side, again, we had demos, and then we had a couple of different talks with updates on IBM Cloud Private 2.1.0.3, which was released in May; and on the scalability side,
B
I actually have a link provided here; it's a blog post with the findings on our scalability testing and the lessons we learned there. So ICP is right now certified for 1,000 nodes, and it's a work in progress, incremental work; we'll have, you know, more data and more documentation available sometime soon on the bigger scaling side, 2,000-or-so-plus nodes. For the future meetings, we have a number of topics out there.
B
It's on the SIG agenda page, so you know, and that includes topics around support for hybrid clouds, deploying workloads between IKS and ICP, or migration, that kind of thing; reviews related to performance; etc. So if you're interested, please do attend the meetings; we would love to have you in the meetings and discussions. And for the next slide, on some of the collaboration we have had going for some time:
H
Thanks, sure, everyone. So this is just some of the community collaboration by members of the IBM cloud team over the last year or so. Some of this collaboration was done before we officially formed the SIG, but in areas of network policy and scalability and storage, these are some of the items we've been trying to work on upstream on the community side, and we're trying to share our lessons learned from the scalability work in our public cloud, on cluster creation and monitoring and performance there.
E
So I'll kind of cover things by the different topics. For those of you who don't know, SIG Autoscaling is in charge of basically anything involved with automated scaling of things in Kubernetes, whether they're pods, or the cluster itself, or components of the cluster. So in this release we kind of had two big areas we were working on. For the horizontal pod autoscaler, we're working on removing scale limits in favor of kind of more sophisticated behavior for determining when it's appropriate to scale.
E
You can have a selector that says, you know, "method: get", and that will translate into the query that ends up being made to your custom metrics storage mechanism. That also has improvements around specifying target average values for metrics, and we kind of went through the API itself and did a little bit of cleanup around where we special-case certain things, and made things more uniform.
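The selector behavior described here corresponds to the metric selectors in the autoscaling/v2beta2 HPA API that landed around this release. A minimal sketch; the metric name and labels are illustrative, not from the talk:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend          # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests
        selector:               # translated into the query made to the
          matchLabels:          # custom metrics backend
            method: get
      target:
        type: AverageValue      # scale on a target average value per pod
        averageValue: "100"
```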
A
Excellent, thank you very much, Solly. All right, that leaves us with 15 minutes to go through this week's announcements. So I just scraped through the shoutouts channel, and I may butcher your pronunciation, so I super-apologize in advance. But we're gonna give a shout-out to Di Xu for being such an active reviewer and reviewing lots of incoming PRs so quickly; a shout-out to Arnaud Newcomb and Jeremy Richard for being awesome bug triage shadows and handling the job wonderfully well, a shout-out that came from
A
last week's 1.12 bug triage person, Misty. A shout-out to Ian Y. Choi, who has become a Kubernetes org member in order to work on the Korean localization and is already providing great feedback, as evidenced by a link you can see in the meeting notes. And then, we love our Slack emojis: thank you so much to Jaice Singer DuMars for creating the testgrid emoji, and then also the testgrid-real-talk emoji, which is, of course, all red.
I
The next announcement is that the steering committee elections are coming next week. Everyone will be receiving an email that's going to explain eligibility, candidacy, etc., so look out for that on the kubernetes-dev mailing list. We will obviously be advertising on many, many other channels; however, the main communication channel will be that list, and then a voters' guide will ultimately be your single source of truth. That's checked into GitHub, but all the details, again, will be laid out next week in an email.
A
Let's see, I was in the middle of trying to drop this into the meeting notes, but I think there's a link in there, so you probably saw an email from Christoph Blecker to kubernetes-dev; contributor experience members will have also seen this. We're basically proposing a change to the path to Kubernetes membership: it used to involve emailing a Google Group address and then just kind of waiting for a while until somebody would approve the request. We're gonna be moving to an issue-based system, where you file an issue against the kubernetes/org repo.
A
It's already going to be filled out with the template of what you're supposed to do, and we think this is going to greatly improve the latency for adding members to the org. Right now the GitHub administration team is going to service these issues as human beings, and we are in the process of setting up automation to help facilitate this in the future.
A
Some of you have probably noticed I've been going around to all of the different SIGs' repositories, trying to make sure that, basically, we as a project start to do the same things across the entirety of the project, to help us with maintenance, auditability, consistency, reliability, what have you. The things I care about right now are kind of coming at you from a steering committee perspective, but they also, coincidentally, will help our automation down the line.
A
The first of those is making sure that every single repo we own, in every single org we own, lives inside of the sigs.yaml file. This is my effort to tell you people to enumerate your subprojects: what is the code that your SIG owns? I really want to give a shout-out to SIG Cloud Provider and the other related SIGs that have to do with clouds, like SIG OpenStack and SIG VMware and SIG IBM Cloud, for sorting out where the cloud provider repos should fall within the different SIGs' subprojects.
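For anyone enumerating their SIG's subprojects: an entry in the kubernetes/community sigs.yaml looks roughly like the sketch below. The repo name here is hypothetical, and the schema has evolved over time:

```yaml
# sigs.yaml excerpt (sketch)
- dir: sig-cloud-provider
  name: Cloud Provider
  subprojects:
    - name: cloud-provider-example   # hypothetical subproject
      owners:
        - https://raw.githubusercontent.com/kubernetes/cloud-provider-example/master/OWNERS
```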
A
That was hugely helpful. You know, ultimately the ownership of code should fall to a single SIG; right now I don't care which SIG that is, I just want to make sure I know who I can go talk to if I want to do something with that code, and people getting their repos added to sigs.yaml is helping. I mentioned this last week, and thank you, Nishi, for getting the AWS subprojects baked in there as well. Similarly, once I know what the repos are...
A
Once repositories have OWNERS files, we can start to use the merge automation that we use on, I don't know, 50-to-60-something of our repos right now, where you can use /lgtm and /approve, and then something will automatically merge the pull request if those two labels exist, none of the do-not-merge labels exist, and all tests pass.
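Those /lgtm and /approve commands are driven by OWNERS files. For reference, a minimal sketch with hypothetical usernames:

```yaml
# OWNERS: read by the Prow merge automation.
# /approve works for people under approvers; /lgtm for reviewers and approvers.
approvers:
  - alice
reviewers:
  - bob
  - carol
labels:
  - sig/testing   # labels applied automatically to PRs touching this path
```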
This is Tide. Tide is coming.
A
Tide is used for every single repository that has merge automation, except for kubernetes/kubernetes, which still uses the submit queue, and we are in the process of seeing if we can swap out the submit queue for Tide prior to code freeze. We're actively working... SIG Testing is actively working with SIG Release to figure out if we can make that happen.
A
Similarly, a lot of the automation requires that the same labels exist everywhere. So, for example, the automation that automatically labels pull requests based on the number of lines changed applies a size/S, size/L, or size/XL label; right now it just kind of creates those if they don't exist. So there are a bunch of repos, like all of the kubernetes-csi repos, that have a bunch of gray, bland, boring labels. I plan on turning on label syncing there, so that the label descriptions are set through the GitHub API and the labels are colored the same way
A
you see them colored in every other repo. Basically, this means that if you work on the Kubernetes project, the same things are gonna work basically the same way in every repo, not just the main ones. I hope this is okay with everybody; I've tried to make a push for this in the past and got some pushback, and I kind of expect to run into some pushback again, and that's okay. I really want to figure out how we can iterate and move things forward, so your help and patience is greatly appreciated.
A
Okay, cool. I see the contributor experience people are really happy about a consistent developer experience, which is great. I just want to make sure I'm not talking into an echo chamber: they're really appreciative of it, but I want to make sure I'm actually getting in touch with the people who are heads-down and making things happen, and that I'm actually improving their daily lives, not irritating them.
E
Just wanted to give a reminder to people, since we're, you know, coming up towards the end of 1.12, vaguely: we are still in the process of deprecating Heapster, and in 1.13 Heapster will be officially considered fully deprecated. Right now we are in bug-fix-only mode, so if you have not yet switched over to metrics-server or an external metrics pipeline, this is a reminder that you may want to get on that.
A
Okay, two things I forgot. Number one: I was supposed to mention that my employer is Google up top, but I think you all probably know that. Number two: I was supposed to tell you all to mute when people are speaking, but, coincidentally, you all did that already, so a big hand to everybody; this is the quietest, least noisy meeting I've ever attended. Thank you all so much. But I definitely want to thank my muting minions for helping out on that, super appreciative, and with that, I will give you seven minutes of your lives back. Thanks.