From YouTube: Kubernetes sig-aws 20180824
A
Hello, this is SIG AWS; it is August 24th. I am your co-moderator for the day, Justin Santa Barbara. We are recording this, so please be aware of that. I will paste the agenda into the Zoom chat, but please do put things on there if you would like to discuss them. We don't have a ton on the agenda, but it's good for the record to have those things, and if we do get stuff on the agenda, it's good to make sure that we get to everything.

B
Sure. So, two weeks ago I announced my intention to step back and nominated Nishi to replace me. We had a vote in the meeting to gauge whether all the attendees were in support of it. They were; nobody objected. We agreed to post the PR as our public record. There were some subsequent concerns about whether a PR was sufficient, or gave sufficient notice, or not.

B
So we agreed to leave the PR open for public comment for a period of time. My suggestion was that we leave it open from one SIG meeting to the next, and I think we're at that point; that would be this SIG meeting. So I'm going to request that we have a follow-up revote, and then we will have voted on it twice.

B
We have had the public record open for two weeks, plus I've had a couple of offline chats with steering committee members who acknowledged that there aren't actually standards for what constitutes public notice. My position was that we do lots of stuff in the open via PRs, and that ought to be adequate. I think there is some project-wide discussion about whether that is indeed adequate or not, but nevertheless, I think we're in totally good shape.

B
I think we've waited as long as we need to, so I want to move that we revote on it. By the way, Kris Nova ran the vote last time. Maybe you should run the vote this time, Justin, just to be super clean. But what we did last time was ask everybody to vote in the chat as well, you know, put a plus one or a minus one.

A
Great, yes. So please put a +1 or a -1 or whatever in the chat. If you don't feel comfortable putting a public +1 or -1, then feel free, you can DM me in Zoom and I will note it and follow up with you. I will say one thing, which is that some of the confusion is probably my fault.

A
Cool. I don't see any negative votes at the moment, so we can tentatively say that that is a thumbs up, and if anything happens before the end of the meeting we can reverse it. Otherwise, I think we have plenty of consensus and plenty of notice, and there was also a thread on the sig-aws mailing list as well. So I think we've covered every possible procedure we can think of.

A
But otherwise, I approve that message. All right, so we have three things left on the agenda for today. Why don't we do DNS, since it's next on the agenda and it might be of interest to a lot of people. There are ongoing discussions generally in Kubernetes about long DNS timeouts, or basically DNS taking a long time to resolve inside the cluster.

A
I've started looking at this. There's a good DNS tester program, which I think Thomas Schaaf posted (or someone else, I can't remember which), and with it the problem is relatively easy to reproduce on basically all configurations, even on kube-dns, so it's not even a CNI issue. I personally haven't seen the iptables issue, but what I have seen is that it does seem to be more likely on AWS than on the other cloud on which I am testing, for reasons that aren't yet entirely clear to me.
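
A minimal sketch, in Go, of what a DNS tester of the kind mentioned above does: resolve a name in a loop and report slow or failed lookups. This is not the actual tool that was posted; the target name and thresholds are placeholder assumptions.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	const name = "kubernetes.default.svc.cluster.local" // placeholder in-cluster name
	const slowThreshold = 2 * time.Second               // what counts as a suspiciously slow lookup

	slow := 0
	for i := 0; i < 100; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		start := time.Now()
		_, err := net.DefaultResolver.LookupHost(ctx, name)
		elapsed := time.Since(start)
		cancel()
		if err != nil || elapsed > slowThreshold {
			slow++
			fmt.Printf("lookup %d took %v (err: %v)\n", i, elapsed, err)
		}
	}
	fmt.Printf("%d of 100 lookups were slow or failed\n", slow)
}
```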

A
There's also a difference between kops on GCP versus GKE, so there's definitely a software component involved. But the difference between kops on AWS and kops on GCP is much, much more noticeable than kops on GCP versus GKE. So it feels like there's some configuration-type issue on AWS, and it's not clear to me to what extent it is UDP reliability versus some software bug versus some configuration issue with the kops setup.

A
I don't know if anyone has any input, but I think in general the way that people are pursuing this is that we're likely to end up with some form of local node agent, so that we can basically avoid UDP packets on the network. It should also sidestep some of the iptables issues, and it should allow us to do better retry policies as well.

D
I don't have any knowledge of why it's different, but we definitely came across that, and our solution was to run, you know, a dnsmasq in front of whatever the DNS solution is on every worker. Our standard practice on VMs would be to run a local caching DNS server anyway, so that seems like the right default to have. The other thing is, you know, there's only so many packets per second you can send to a DNS server in Amazon from a single host.

A
That makes a ton of sense. I will say that I also see it even with tests that shouldn't even leave the cluster, ones that use internal DNS names. There are many layers, right? Another factor is that if your cloud provider's resolver is throttling you, you'll see failures as well, and we are sending something like six times more DNS requests, at least, than you might expect.
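
The fan-out being described comes from the default pod resolv.conf: with ndots:5 and several search domains, a short name is tried against every search suffix before the literal name, and each candidate is typically looked up for both A and AAAA records. A sketch of that expansion, with the search domains assumed from a typical default setup:

```go
package main

import (
	"fmt"
	"strings"
)

// candidates lists the names the resolver tries, in order, for a given query.
// With fewer than ndots dots in the name, every search suffix is tried first.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, suffix := range search {
			out = append(out, name+"."+suffix)
		}
	}
	return append(out, name)
}

func main() {
	// Search path assumed from a typical pod resolv.conf (ndots:5).
	search := []string{
		"default.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-1.compute.internal", // VPC-supplied suffix; varies by region
	}
	for _, c := range candidates("example.com", search, 5) {
		fmt.Println(c) // each candidate is usually queried for both A and AAAA
	}
}
```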

A
I like the dnsmasq approach. My understanding is that OpenShift runs kube-dns effectively as a DaemonSet, which is a nice approach for that as well, although much more resource-heavy than just running dnsmasq on each node. But anyway, in general it seems like there is a Kubernetes-wide initiative to ask users to put some form of resolver, some form of proxy, on each node, which I think would be a good thing.
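
A minimal sketch of the "resolver on each node" idea as a DaemonSet, built with the Kubernetes API types. The image, listen address, and upstream cluster DNS IP are assumptions for illustration, not values from the meeting.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nodeLocalDNSCache() *appsv1.DaemonSet {
	labels := map[string]string{"k8s-app": "node-local-dns"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "node-local-dns", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					HostNetwork: true, // answer DNS on the node itself, no UDP across the network
					Containers: []corev1.Container{{
						Name:  "dnsmasq",
						Image: "example.com/dnsmasq:latest", // hypothetical image
						Args: []string{
							"--listen-address=169.254.20.10", // assumed link-local listen address
							"--server=10.96.0.10",            // forward cache misses to cluster DNS (assumed IP)
							"--cache-size=10000",
						},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(nodeLocalDNSCache().Name)
}
```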

D
Just really quickly: I actually haven't had much time, I had other priorities shift around, to work on some of the beta work for NLB for this next release, and code freeze is in a little under two weeks. I put in the link to the feature ticket, and there's kind of a list of bug fixes and features that we'd like to have added. So if anyone has time and wants to contribute to those, that would be great.

A
A question that actually came up: there's an issue about load balancer mapping. If you have a service of type LoadBalancer, there's hard-coded logic in Kubernetes to exclude the master nodes. Basically, we always did that historically. There was a regression where it stopped working, and some people filed bugs saying that they thought that was a bug, which it was certainly a regression.

A
There's now discussion of whether we should be special-casing the master. I think probably we should, but there's also a debate about what happens if you actually create a service with external traffic policy Local, or whatever it is, and you explicitly put pods on that master node: whether traffic should go to them through the NLB. Am I wrong, or does the NLB still not go directly to the pod? It still follows the normal flow, doesn't it?
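
A sketch of the service shape under discussion: type LoadBalancer with externalTrafficPolicy: Local, plus the NLB annotation. With Local, kube-proxy only forwards on nodes that actually host a ready pod, which is why it matters whether masters get registered as targets. The service name, selector, and ports are hypothetical.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func nlbService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example", // hypothetical service name
			Annotations: map[string]string{
				// ask the AWS cloud provider for an NLB rather than a classic ELB
				"service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
			},
		},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeLoadBalancer,
			// only nodes with a local ready pod receive traffic
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "example"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}

func main() {
	fmt.Println(nlbService().Name)
}
```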

D
Right, that's because the service interface, or specifically the load balancer interface, doesn't update services on endpoint changes, only on node changes. So with something like the AWS CNI, or any Kubernetes CNI provider where the VIP is in the VPC, it would be really great to have a target type of IP, of the pod IP, not just the node IP via kube-proxy. But the current, yeah.

D
The current interface doesn't allow that. I did see that Google's latest load balancer, I forget what it's specifically called now, recently got the same sort of thing, but it looks like they'd be using an alternate controller for that. So I don't know if that might be more of a SIG Network discussion, talking about updating that interface, maybe in combination with SIG Cloud Provider, but I think that's definitely something that we could explore. Okay.

D
And one other point: you brought up the point about masters being targeted. Right now there's also an annotation for excluding specific nodes from load balancer traffic, but that currently applies to all load balancers, so unless you're again using external traffic policy Local, it's just sort of a very broad,
D
Not
a
fine
find
tool
that
you
get
to
use,
so
it
would
be
really
nice
I,
don't
know
exactly
what
that
would
look
like
if
that's
an
annotation
on
nodes
for
specific.
You
know,
service
names,
free
services
to
say,
like
this
service
shouldn't,
go
to
this
node
or
not,
but
it
would
be
great
to
come
up
with
a
solution
for
yeah
routing
traffic,
to
specific
nodes
for
specific
services.
A
My
guess
is:
that's
gonna
be
external
traffic
policy.
Maybe
we
can
leverage
external
traffic
policy
local.
Maybe
we
need
a
new
value,
but
I
think
that
I
hope
that
covers
all
of
the
use
cases,
but
yeah
I
would
be
great.
I'm
gonna.
Guess
we're
gonna
hammer
some
of
this
out
in
that
master
issue,
because
I
agree
with
you.
It
is
a
very
big
hammer
to
do
the
note.

A
Absolutely. I think the use case that was presented was people with a single-node Kubernetes cluster, like one machine that is running as the master and also running the user workloads. Because of that, a particular user had labeled their node and tainted their node as a master, well, maybe they just labeled it as a master, and then obviously they weren't getting any traffic to that node, because it was a master.

C
Yes. I can only think of a scenario where you're running a large database, it's a scale-up environment, and then that database has a web app attached to it and you're testing. I don't know how many production scenarios would have this; I just wanted to get a read on what other use cases we've seen.

B
I think there is interest, even in HA-type scenarios, to bring up three nodes, three bigger nodes. Whether this is overall good operational practice or not is certainly debatable, but there are definitely users who are interested in running, like, three nodes and then using those nodes as both the control plane and the workers.

B
No, I think it's really just a cost thing. People are trying to cut costs as much as they can in some scenarios, and you're right, it's probably mostly dev/test-type things, but especially for smaller companies that may not be the case; they may just be trying to squeeze everything they can.

C
Correct me if I understood you wrong, but we were also evaluating the same scenario for the ALB ingress controller implementation. One of the things that I didn't realize, when thinking about how a target group resolves, or rather how a service name resolves, to pods behind that service in an ALB ingress scenario, is that it basically finds where the pod is running by looking at all the nodes, and then, using NodePort, the packet is routed over to the specific pod.

C
So we were thinking of modifying the target group annotation to allow putting in a pod IP as well, because of the CNI reason, but then that limits the target group to the number of pods within the cluster. Whereas if we keep it to the node IP, then it actually allows us to have multiple pods per node within the cluster, and hence the scale of what an ingress controller can do expands.
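
A sketch of the two target-registration modes being compared, following the ALB ingress controller's annotation convention (alb.ingress.kubernetes.io/target-type): "instance" registers node IPs behind a NodePort hop, while "ip" registers pod IPs directly, which is possible when the CNI puts pod IPs in the VPC as the AWS VPC CNI does. The ingress name and backend are hypothetical, and the extensions/v1beta1 Ingress type reflects the API of that era.

```go
package main

import (
	"fmt"

	extensionsv1beta1 "k8s.io/api/extensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func albIngress(targetType string) *extensionsv1beta1.Ingress {
	return &extensionsv1beta1.Ingress{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example", // hypothetical ingress name
			Annotations: map[string]string{
				"kubernetes.io/ingress.class":           "alb",
				"alb.ingress.kubernetes.io/target-type": targetType, // "instance" or "ip"
			},
		},
		Spec: extensionsv1beta1.IngressSpec{
			Backend: &extensionsv1beta1.IngressBackend{
				ServiceName: "example",
				ServicePort: intstr.FromInt(80),
			},
		},
	}
}

func main() {
	// "ip" mode: target group size tracks pod count; "instance" mode: node count.
	fmt.Println(albIngress("ip").Annotations)
}
```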

C
And so, because these concepts are tied to NLB, which is physical infrastructure, and ALB, which is physical infrastructure too, we should be clearly communicating to customers what the scale limits are and what they're getting with one versus the other, pod IP versus node IP.

D
I'll comment really quickly: yeah, there are definitely scale concerns depending on the cloud implementation, like how fast a classic cloud load balancer can react, and API limits, because pods are much more ephemeral. If you have a pod that crashes, backs off, and keeps flapping or something, then what's that going to do to the API? It's that kind of thing; it's kind of to be explored. It's more conceptual at this point; there's not been an implementation.

A
Very clever, I think. Anyway, shall we move on? Seth, you have an item: an API for AWS instance type info, CPU, memory, max pods, which, as Leah pointed out, would be helpful for Cluster API. I think this is indeed the right forum to ask whether there is such an API and where it is, as it were.

G
Yeah, there's a bunch of projects that could actually use that: kops itself, the cluster autoscaler. They're all currently working either with hard-coded information about instance types, CPU, memory, max pods, stuff like that, or they're pulling it from the pricing API, which is not the greatest place to get that information from. So I was just wondering if there's a better place, or some sort of solution to that problem.
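
A sketch of the kind of hard-coded table these projects carry today (kops and the cluster autoscaler each embed something like this). The m4 values below match AWS's published specs, but the struct and map are illustrative, not code from either project.

```go
package main

import "fmt"

type instanceInfo struct {
	VCPU      int
	MemoryGiB float64
	MaxENIs   int
	IPsPerENI int
}

// A hand-maintained table: every supported instance type has to be kept
// current by a human, which is exactly the pain being discussed.
var instanceTypes = map[string]instanceInfo{
	"m4.large":  {VCPU: 2, MemoryGiB: 8, MaxENIs: 2, IPsPerENI: 10},
	"m4.xlarge": {VCPU: 4, MemoryGiB: 16, MaxENIs: 4, IPsPerENI: 15},
	// ... and so on for every type the project supports
}

func main() {
	info, ok := instanceTypes["m4.large"]
	if !ok {
		panic("unknown instance type") // the failure mode of a hard-coded table
	}
	fmt.Printf("m4.large: %d vCPU, %.0f GiB\n", info.VCPU, info.MemoryGiB)
}
```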

D
This is something where we've definitely felt this exact pain, like what you called out specifically with max pods, the ENIs and number of IPs per ENI kind of thing. This is really good feedback that I think we can take to other service teams, and try to see if we can come up with something.

E
But that could be calculated from limits, and there is no limits API, right? Because, I mean, max pods is kind of very specific to that implementation. I would argue that max pods shouldn't have to be a first-class attribute, because it just happens to be the way that the CNI provider works, right? Yeah.
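
The calculation being alluded to: with the AWS VPC CNI, pod capacity is derived from ENI limits rather than being an intrinsic property of the instance. This is the formula the VPC CNI's eni-max-pods table is generated from, shown here with AWS-documented m4 ENI limits.

```go
package main

import "fmt"

// maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
// Each ENI's primary IP is reserved for the node itself, and the +2 accounts
// for host-network pods (e.g. kube-proxy and the CNI daemon) that don't
// consume a VPC IP.
func maxPods(enis, ipsPerENI int) int {
	return enis*(ipsPerENI-1) + 2
}

func main() {
	fmt.Println(maxPods(2, 10)) // m4.large  -> 20
	fmt.Println(maxPods(4, 15)) // m4.xlarge -> 58
}
```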

D
IPs and ENIs, IPs per ENI, yeah, that's kind of the thing we'd want to see. But I'm curious to hear what the use case is in terms of CPU and memory, because typically you might have a kubelet that would have, you know, reserved some slice off of that. So yeah, do you have any comment on that?

G
Yeah, I mean, the cluster autoscaler needs that information for scaling from zero. Normally it would pull the information from the live node, but when you want to scale from zero, in order for it to make its scheduling decisions it needs to know that information beforehand, and I think kops needs that information as well to set up new nodes.

A
I think kops needs it too. We definitely do it for the disks; we need to know what disks to map. And I think we also need to know whether it's a particular instance type for some other things, like whether it's an ENA-accelerated type or not. I think there's another flag we have to pass depending on the instance type, if I recall correctly.

E
Another actual use case, I think a pretty legitimate one, would be if I wanted to expose it through a UI of some sort, let's say, or a CLI, or an actual web UI. Like, if you wanted to ask a user to select some instance types, and they don't remember what the instance types are, like what the actual CPU and memory is that the instance type offers, it'd be nice to have that there.

C
This was an action item from last time: I raised an issue to request moving the code that we have for the CSI driver to sig-aws as a sub-project. Yan Hemans D and our team were working on the code, and we'd like to fix some testing for the code as well, and offer that infrastructure out to all the sub-projects in sig-aws. So I raised the issue; I would like some comment on it, and I'd like to close it as soon as possible.

C
Essentially, the confusion was that CSI drivers today are not maintained really well. As far as I can see, GCP has a sub-project for a CSI driver that they maintain, and OpenStack is the only one that has actually moved all their code under their cloud provider repo. So as long as the sig-aws GitHub presence is there, and there are sub-projects, I'd like to move the CSI driver there, and also provide infrastructure so that we can hook up CI, Travis for now, and start all our end-to-end testing for every sub-project, I think.

A
So if we want to move it under a kubernetes-sigs project, I think we have a semi-official public procedure for that, which is that someone proposes it, which you've just done verbally in the SIG meeting, and we discuss the idea of having a new SIG repository. So in this case it would be a kubernetes-sigs AWS CSI driver repo, something of that nature, yeah.

A
Does anyone feel strongly one way or the other that we should or should not do this? The only thing I can possibly think of as a sticking point is whether we want separate repos for the CSI driver, the CNI driver, and the cloud provider, or whether we want to sort of put them all under one. But we seem to be doing separate repos, so if that's what other people are doing, and GCP is doing that, maybe we should do the same thing.

C
But it's not enough, and I think it would make sense to add all the tests for the sub-projects within sig-aws to start off with. We're working with Aaron, Benjamin Elder, and Cole Wagner to figure out how to streamline all the CI tools and absorb all these tests into test-infra, but that'll take a long time, and so I'd like to have some healthy sub-project testing done within sig-aws GitHub to start with.

A
I would hope everyone is on board with more and better testing. But yes, does anyone else have any objections, I guess, to moving the CSI driver, or rather creating a new kubernetes-sigs repository for the AWS CSI driver? It would be another project owned by this SIG, as it is. If you do have an objection, or any reasons or concerns, then do please speak up; what we normally do at this stage is give people time to talk.

A
Assuming that no one wants to say anything, then I think we basically do a little proposal to the steering committee. We're going to end up with a proposal to the steering committee which states the name of the new repo, who the initial owners are going to be, and what's going to be in there. So I guess if we circulate that to the sig-aws list, then people have the opportunity,

A
people that are not in this meeting will have the opportunity to comment, and then we can send it to the steering committee. It feels to me like a very natural project for this SIG to own, and especially if it's in someone's personal repo today, I think that will be a good move. Thank you, Nishi.