From YouTube: Kubernetes sig-aws 20190405
A: Hello, everyone. It is Friday, April 5th, and this is SIG AWS. I am your host and moderator, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and will be put on the internet, so please respect our code of conduct and be a good person.
A: I'm going to paste the link to our agenda in the notes; Christian has already done it. Thank you, Christian, for pasting the link to our agenda in the notes. If you do want to add anything to the agenda, please feel free to do so, so we can be sure to cover it.
A: If you would like to put your name in there, that is often helpful for people who want to know who you are, especially when they're looking at the video later on. And yes, we have a couple of things on the agenda.
A: I think the first one is that I did not reach out in time to schedule a demo; I apologize for that. We do have two things in the queue, so hopefully next time, in two weeks, we will have a demo again.
A: The things in the queue right now are Omer giving a demo of Kamus, a secret encryption solution, and Naadir, who will give us a demo of the Cluster API provider for AWS, which sounds great.
A: If you do want to give a demo of anything you're working on that relates to AWS, please pop it into the backlog queue, and then I will hopefully be more proactive about getting it scheduled. Other than that, we don't have a ton of stuff on the agenda.
B: So yeah, I think the project is definitely super neglected right now. I sort of stopped working on it quite a long time ago and have just been popping in for pull request reviews every once in a while. But I am attempting to come back a little bit. That alpha release that you spoke of is, I think, 0.4.0 or something, so that's pretty much ready to turn into a stable release.
B: There's one pull request that I'm actually reviewing right now, which adds a super nice caching of MFA tokens; that has been pretty annoying for quite some time. I can't work on it full time, and that's actually a perfect segue: it would be great to have some renewed interest, and if anybody can devote some of their time to reviews, or eventually become co-maintainers, that would help.
E: Hello to you all. Can you hear me? Yes? We're a small team in Germany, in Düsseldorf, and we forked the autoscaler some two years ago. We are heavy AWS users; we were using Beanstalk a lot, and around two and a half years ago we decided to go out on our own, with Kubernetes on top of EC2. We ran into the problem that the autoscaler wasn't supporting Spot, so we had to work on something so that the cluster could grow and shrink.
E: We wanted to make an MVP, making as few changes as possible to get it through quite rapidly, but it's quite some code, and in the one and a half years since, some expectations have come on top, like EC2 Fleet from AWS, and we have absolutely no support for it. I read the documentation today, and I don't know how well it fits.
E: We could make it work with it, but one problem I see is that you can define a group with a very big mix of different machines, and you would have to restrict it to machines with the same CPU count and the same memory. In order to work with Kubernetes, you couldn't put a machine with two cores and a machine with 32 in the same group, because Kubernetes could not scale up effectively.
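To illustrate the concern about mixed machine sizes, here is a toy sketch (not code from the fork being discussed) of why the cluster autoscaler wants every machine in a node group to have the same shape: it estimates a scale-up by multiplying "nodes to add" by one assumed per-node capacity. The numbers are made up.

```python
# Toy illustration: the autoscaler plans a scale-up assuming uniform nodes.
def nodes_needed(pending_pod_millicpu, assumed_node_millicpu):
    """How many nodes to add, assuming every node has the same capacity."""
    total = sum(pending_pod_millicpu)
    return -(-total // assumed_node_millicpu)  # ceiling division

pending = [500, 500, 1000, 2000]  # milliCPU requests of pending pods

# If the group really contains uniform 2-core (2000 milliCPU) machines,
# the estimate is correct:
plan_small = nodes_needed(pending, 2000)

# But if the same group can also hand back 32-core (32000 milliCPU)
# machines, the planner cannot know which size it will get, so a plan made
# for one shape is wrong for the other:
plan_big = nodes_needed(pending, 32000)
```

The two plans disagree (two small nodes versus one big node), which is why a group mixing both shapes cannot be scaled effectively.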
C: This is Mike from AWS. We have been looking at that PR, and we're actually really keenly interested in adding Spot support and Fleet support to the cluster autoscaler; I think it's something we want to push forward. There are a couple of things that I'd like to discuss with you offline. Is there a way I can contact you, on Slack maybe? Yeah.
D: So right now, yeah, actually I recommended he come to SIG AWS to talk about this. In SIG Autoscaling there aren't a lot of users with this use case, so I asked him to come to see us, because here we have probably the most expertise in this area and user stories like this. Right now the cluster autoscaler assumes each node group has one instance type. Last year AWS released two major features.
D: Okay, yeah. One thing is the Fleet API; the other is the ASG support for multiple instance types and purchase options. Both of them allow different instance types in one ASG, which is not compatible with how we configure the cluster autoscaler today, and both of them require a launch template rather than a launch configuration when configuring settings. I'm not sure how provisioning would map queries to those instance types. Right now there's a lack of solutions for Spot instance management; that's why this work matters. Yeah.
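For readers unfamiliar with the ASG feature being referenced, here is a hypothetical sketch of its request shape: a `MixedInstancesPolicy` must reference a launch template (launch configurations are not supported) and lists instance-type overrides plus a Spot/On-Demand split. The group name, template name, and type list are made-up examples.

```python
# Hypothetical CreateAutoScalingGroup parameters using MixedInstancesPolicy.
# Note the launch template reference: launch configurations cannot be used
# with this feature, which is one of the compatibility points raised above.
mixed_instances_asg = {
    "AutoScalingGroupName": "k8s-workers",
    "MinSize": 1,
    "MaxSize": 10,
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-template",  # a launch template, not a launch config
                "Version": "$Latest",
            },
            # Same-shape types, per the earlier concern about mixing sizes.
            "Overrides": [{"InstanceType": t} for t in ("m5.large", "m5a.large", "m4.large")],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything above base is Spot
            "SpotAllocationStrategy": "lowest-price",
        },
    },
}

# With boto3 this dict would be passed as:
#   boto3.client("autoscaling").create_auto_scaling_group(**mixed_instances_asg)
```

The autoscaler problem is visible in the `Overrides` list: one group can now deliver several instance types, so the one-type-per-group assumption no longer holds.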
A: That's great. Thank you both for looking at that and moving it forward. One thing: I think we should try to keep the discussions in the open, generally, if we can. I mean, we should obviously talk on Slack as well, but if it's something that can be productive in the open, I think we should feel free to use this time to do that.
A: If that's helpful. In particular, you know, we can always schedule it at the end of the meeting, so that people who aren't interested can attend as they want to. I'll also take a look at the PR. I think, Jan, it sounded like you said that you had looked at Fleet and there were some limitations, I guess, or differences in approach, and you wrote that up. Did you write that up in public?
E: It doesn't quite fit the natural mechanisms in Kubernetes, okay. To me, Fleet would be very nice, because we have the issue that some Spot groups are sometimes not able to scale, because the matching machines are not available. So it would be nice to have something that kicks in to move over to another group. But I also had some ideas for how I could extend the autoscaler to get this functionality to work.
E: We could look at the Spot instance market over the API, see which machine types are not available anymore, and put an infinite price on those machines so that they don't get picked by the autoscaler. It would be possible to build this functionality in directly. Right now we are very pleased with the way it works, because we can define the groups that the cluster is using in production.
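The infinite-price idea just described can be sketched in a few lines (this is an illustration of the idea, not the speakers' actual code; the price table stands in for a real Spot price or availability query):

```python
# Sketch: mark unavailable Spot instance types with an infinite price so a
# price-based group picker never chooses them. Prices are made-up examples.
import math

spot_prices = {"m5.large": 0.035, "c5.large": 0.032, "r5.large": 0.046}
unavailable = {"c5.large"}  # e.g. no Spot capacity for this type right now

def effective_price(instance_type):
    """Return the Spot price, or infinity if the type cannot be provisioned."""
    if instance_type in unavailable:
        return math.inf
    return spot_prices[instance_type]

# Pick the cheapest type that is actually available: c5.large is nominally
# cheapest but unavailable, so the picker falls back to m5.large.
cheapest = min(spot_prices, key=effective_price)
```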
E: The only advantage of Fleet for us would be the ability to switch to another Spot group if one of them is not available. But, as I say, we have been running this solution in production for one and a half years, and it works quite well. We are not pleased with the code; I guess there's a lot to do on it, and in this one and a half years there has been a lot of work on the autoscaler.
A: I think you've also raised this at exactly the right time, because there's also a discussion in cluster autoscaler about how we are eventually going to move to using the Cluster API, which will be even less tied to cloud infrastructure. So maybe we'll be able to find a nice abstraction, I hope, but we should at least think about it. I see Naadir nodding along, yeah. So this is a great time.
A: Certainly I'm going to read through the PR and the thousands of comments on there, and I apologize that it took so long, at least for me personally, to get to it. But thank you. These things happen, you know; I can see why: it's one of those PRs that crosses different groups, and yeah, it's a hard one. But it sounds like it's very valuable functionality. So thank you, and I'll have a look at it. If no one else wants to add anything, I can actually...
A: You know, I don't know if we've ever done that. It's a good idea; I don't know if we've ever done it, but we could. I'll suggest what we do: there's also Cluster API, which is also going to come into this. I suggest we have a look and see where it fits into Cluster API, and let me look at it and see. I think the answer I'm sort of trying to give is: I don't know right now. It's complicated, but it could actually be...
A: I think fundamentally the issue is that we're putting a lot more AWS functionality into the cluster autoscaler, or a lot more cloud-specific functionality. Today the functionality has been fairly minimal: it's basically, go talk to the cloud provider, get the information, and then write it back. It's not looking at pricing and history.
A: It doesn't really understand any details of the underlying platform, and this is adding a lot more understanding of the underlying platform, to the extent that it's almost an AWS-specific autoscaler at this point. You're right, but the cluster autoscaler today is not cloud-specific. So I think there's an architectural question as well, and I think that's...
A: We need to figure out whether the cluster autoscaler team wants to have cloud-specific cluster autoscalers, or whether we can have a sort of controller-type mechanism where this logic could live outside of the autoscaler. The reason I think Cluster API is relevant is that it might be that Cluster API needs that architecture anyway. So yes, I am optimistic, but...
E: And I had to put something else into the PR. We have this one piece of code doing the calculation from the different types of EC2 machines back to milliCPUs, to be able to work with the Kubernetes API, and we made very gross assumptions there that work for us but are by no means generic for other people. And then something happened just last week.
E: We had a story: we are building an accounting service for our clients, our colleagues in the company, and we found a way to break down the cost of the cluster into Kubernetes units, and to do it in a real way: we calculate the real price of a milliCPU at any given point in time for the cluster. This is a piece that is also not in this PR; it's a very big construct, so I wouldn't add it right now. I would put it in another PR.
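As a toy sketch of the cost-accounting idea just described (these are assumptions for illustration, not the speakers' implementation): take the hourly price and vCPU count of each node currently in the cluster and derive a blended price per milliCPU-hour. Prices and the node list are made-up examples.

```python
# Blended milliCPU-hour price for a cluster: total hourly spend divided by
# total milliCPUs (1 vCPU = 1000 milliCPU). The value changes whenever nodes
# join or leave, i.e. "at any given point in time for the cluster".
nodes = [
    # (hourly price in USD, vCPUs) -- illustrative values
    (0.096, 2),   # e.g. an on-demand 2-vCPU node
    (0.035, 2),   # e.g. a Spot node of the same shape
    (0.384, 8),   # e.g. an 8-vCPU node
]

def millicpu_hourly_price(nodes):
    total_cost = sum(price for price, _ in nodes)
    total_millicpu = sum(vcpus for _, vcpus in nodes) * 1000
    return total_cost / total_millicpu

price = millicpu_hourly_price(nodes)
```

A workload requesting 250 milliCPU for an hour would then be billed `250 * price` under this scheme.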
A: One of the things we've talked about is that a lot of the AWS subprojects end up building a database of instance types, and it would be great to have that in a shared project somewhere, or to pick one project and have us all reference it. But yeah, I guess we have to see the use case for exactly that. Is your work open source yet?

E: [inaudible] That's a big problem.
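The shared "database of instance types" idea could look roughly like this (an illustrative sketch; the entries are a tiny hand-picked example, not a complete or authoritative dataset): one mapping from EC2 instance type to its resources that subprojects such as the autoscaler or cost accounting could all reference instead of each maintaining their own copy.

```python
# Sketch of a shared instance-type lookup that several AWS subprojects
# could reference instead of duplicating the data.
from dataclasses import dataclass

@dataclass(frozen=True)
class InstanceSpec:
    vcpus: int
    memory_gib: float

INSTANCE_TYPES = {
    "m5.large":   InstanceSpec(vcpus=2, memory_gib=8),
    "m5.2xlarge": InstanceSpec(vcpus=8, memory_gib=32),
    "c5.large":   InstanceSpec(vcpus=2, memory_gib=4),
}

def millicpu_capacity(instance_type):
    """Kubernetes-style CPU capacity (1 vCPU = 1000 milliCPU)."""
    return INSTANCE_TYPES[instance_type].vcpus * 1000
```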
A: That's great, thank you. Yeah, it was more just trying to understand whether, if there's code, we can look at it and discuss it more concretely; but it's fine if not, obviously. I think there's a lot of things we can look at, and I will look at the autoscaler. I think there's a lot of work for us to do. I appreciate everything you've done. Thank you. Yeah, please...
A: Yes, so there are some interesting things in there, including the import of the VPC CNI provider and the IAM Authenticator, so some great information there. But I think we can probably wait until she is here and then have her present it, or go over that when she wants to. We have two other items. You were looking for an update on the encryption provider release pipeline.
G: I talked to Chuck, who is part of SIG Cluster Lifecycle, just to see what the situation is in terms of the ability to add an additional container to the API server pod spec; that's what's needed in order to run the containerized version of the encryption provider, which shares a UNIX socket. It sounds like he was optimistic about it, and he had me create an issue for it, but that seems like it's going to be a ways out.
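For context, the sharing mechanism under discussion works roughly like this (a generic sketch, not the encryption provider's actual protocol; the real provider speaks gRPC over the socket, and plain bytes are used here just to show the plumbing): one process listens on a UNIX domain socket at a shared filesystem path, and the other connects to the same path.

```python
# Generic sketch of two processes sharing a UNIX domain socket, the plumbing
# a sidecar encryption provider relies on. In Kubernetes the socket path
# would live on a volume mounted into both containers of the API server pod.
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "kms.sock")

# "Encryption provider" side: bind and listen on the shared socket path.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

def plugin():
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"encrypted:" + data)  # stand-in for a real KMS call

t = threading.Thread(target=plugin)
t.start()

# "API server" side: connect to the same filesystem path.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(sock_path)
    client.sendall(b"secret")
    reply = client.recv(1024)

t.join()
server.close()
```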
A: I have good news. The reason I was a minute or two late was because I was talking to Tim about getting those repos going, or starting to get them going, this weekend. So I am going to work on that. I think we've started to create some GCR registries (so, Docker registries); I don't know the extent to which they've been created for all projects, but the intent is that we will. I'm going to add a GCS bucket for blobs.
A: We're going to have this distinction. It hasn't happened yet, but the intent is that we'll have a staging and production split, and the idea is that staging will have a staging registry and a staging bucket. They will be on GCS and GCR; they're both Google properties, sorry, but we have the funding for it from the CNCF, they will be run by the CNCF, and it...
A: It shouldn't really matter much in terms of delivery; we had to pick something. In terms of the underlying artifacts, most end users will be getting them from the prod buckets. For the prod side there is an image promoter, which I think is, if not approved, about to be approved, and that will enable declarative promotion of images from the staging bucket to the production bucket or buckets, and the registry there will be GCR.
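The "declarative promotion" just mentioned can be sketched in miniature (this models the idea only, not the real promoter's manifest format; registry contents and digests are made up): a checked-in manifest declares which image digests should exist in production, and the promoter computes the delta to copy from staging.

```python
# Sketch of declarative image promotion: diff declared state against what
# production currently serves; only the delta is copied from staging.
desired = {  # declared state: image -> digest that prod should serve
    "cluster-autoscaler": "sha256:aaa",
    "encryption-provider": "sha256:bbb",
}
prod_registry = {  # what production currently contains
    "cluster-autoscaler": "sha256:aaa",
}

def promotion_plan(desired, prod):
    return {
        image: digest
        for image, digest in desired.items()
        if prod.get(image) != digest
    }

plan = promotion_plan(desired, prod_registry)
# Only encryption-provider is missing from prod, so only it gets promoted.
```

The appeal of the declarative form is that promotion becomes a reviewed change to the manifest rather than an imperative push to production.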
A: But I imagine, if there is funding from the CNCF, there will be other buckets and other registries at different providers, like ECR, for example, or Docker Hub, or whatever it's called now. So we should hopefully start making progress on that, but I don't think there is anything as of yet. But I will...
A: I think the encryption provider is now on my list of the ones we need to do, and I will make sure it is on the list. And it's good to know that you want to do both containers and blobs, because most people, or a lot of people, only want to do containers. So I think it's great that you do blobs as well; we could have another use case for that.
A: There is a group called WG K8s Infra, and I will put a link in the notes, but you can Google it. This is infrastructure for, I guess, storage of the artifacts; we are not really changing the story for the CI/CD itself, I would say. So if you use Prow, then I think that will continue: you can continue to use Prow, or you can continue to use Travis or your preferred CI system of choice.
A: They were going up. I will double-check why they are not going up, if they're not. I'm just looking at the list; I don't want to start playing videos right now, sorry. They should be going up; they should be going into the playlist. I will make sure that this one goes up, and when I do that, I will look at the other ones.
A: But yes, there is a YouTube playlist called Kubernetes SIG AWS, and within an hour or two it should have this video on it, and hopefully the other ones from 2019, because I thought they were going up. If not, I will double-check; maybe I just forgot to make them public. Okay, cool. It could be.
A: So, the steering committee meeting is next week, and I think it will be interesting to see what happens there. I'm trying to make it so that whatever changes happen organizationally, there are no user-visible changes. Maybe it won't be SIG AWS but something adjacent; I want to call it COG AWS, but I think other people are being more literal, and I want to keep it more straight-laced. But there are some...
A: There are possible changes to be discussed in the steering committee next week. This would be across all cloud provider SIGs, the provider-specific SIGs. The underlying intent is to try to get alignment, to avoid duplication of effort across the cloud providers. Christopher, did you want to say something? No? Okay.