From YouTube: Kubernetes - AWS Provider - Meeting 20200110
Description
Recording of the AWS Provider subproject meeting held on 20200110
A: Hello, everybody, and welcome. This is the AWS bi-weekly meeting for Kubernetes. A reminder that this meeting is being recorded and will be put on the Internet, so please be mindful of our code of conduct. The date is Friday, January 10th, 2020. We have a couple of things on our New Year's agenda. First off we're going to have something from Peter, and then we're going to do issue review and prioritization of any issues that came up in the repo over the new year. We did that last time we met, which was almost a month ago now, and I had some action items, as did other people.
C: So we'll stick with personal goals, then. Yeah, by the end of this year we want to be migrated over, but I really should speak in releases, I guess. So let's see what's coming up: right now we're on 1.17, which was December, and 1.18 is in the works now, right? Yeah. So personally, I'd like to have a little bit stronger of an alpha version of the external provider around the time 1.18 goes out. I think that's a reasonable goal.
C: Ideally, we'd then go to beta with the next release, and then maybe GA with the release after that. There are some considerations with the machinery that's in kubernetes for doing actual migrations from internal to external, so that has to be, I would assume, at least beta or GA before we can actually migrate clusters over. Although that really only applies to clusters that can't have any downtime; if you can take downtime, then you can sort of migrate at any time.
C: I don't think they need to be, and I think eventually they shouldn't be; we should start getting into a faster cadence. But initially, the only reason I said we should do it at 1.18 is that it's a date we all know and it kind of makes sense, and sometimes we will be relying on features in upstream, like the migration stuff. But we don't need to, so there's no hard requirement to couple with kubernetes releases.
C: So we were in kind of a bad state, in my opinion. Originally there was a push to get this stuff done, like, last year, and the way it was decided to do that was to just copy the cloud provider code over to the external repo, and then the two quickly diverged.
A: The only other thing we have on the agenda is the regular issue review process, issue review and prioritization, as just added by Nick. If there are other things, please do speak up or add them to the agenda. Otherwise, I think we will dive into that. I'll give people a minute, or a few seconds, to speak up.
A: It was the NLB, yeah, the one Alex [inaudible] had previously used, and it'll be in the last comment.
A: Priority important-soon, yeah, it feels pretty [right]; I just tagged it as such. I cheated and used the drop-down, so I'll try to remember the magic words; I will look up the magic words while we do the next one. So it's tagged as a bug, the sig is cloud-provider, the area is AWS, and it's priority important-soon. I don't know that that feels like the right place to prioritize it.
A: I guess if anyone is running one of these and hasn't run them for a while, it would be super helpful to Fred to open up their cluster and see whether they have a bunch of, I guess, offline targets or something. Or, I mean, if these targets were still there, they would presumably be health-checked and sort of sitting in an odd state.
A: It definitely needs more investigation and checking. But yes, either way; well, yeah, if someone's labeling other instances in a way that mixes it up, that's, I don't want to say user error, but we can see what's going on there. That's one possibility. But then, if it's a bug on our side, that's a much more serious thing. Yeah.
A: The service controller pulls into the cloud provider, right, so the service controller has [informers], I guess, and the service controller actually contains the list of nodes, so I don't think the user could have tagged these things incorrectly. If I recall correctly, the service controller passes in the nodes; it isn't like it lists all instances.
A: This is true. It should be noticed when you... eventually it will reconcile, and it should notice at that time. A way to force that is to restart the kube-controller-manager, and so there are two things, I guess. If, when you restart kube-controller-manager, it does not refresh, that's a bug, but I don't believe that to be the case.
A: I don't think it has a periodic reconcile like it does for other things; I don't think it does it every 10 hours or every hour. I don't believe so, but I don't want to say that across the board. We certainly could add something like that, but we haven't added it to date, and I don't know whether we should. Like, why did someone delete the ELB, I guess, is the underlying question.
A: We could do it every five minutes and just read them all. We have generally tended to do per-object reconciliation, and I'm not sure that's optimal from an API-usage standpoint. It might be better to list all the objects all the time on a schedule, at least every minute, but then, when we're actively creating a volume, poll more often, but still just poll them all. That is a very big change, though, and I think the answer of where that's optimal probably also differs.
E: I've seen the [inaudible] bug stuff, and I claimed or commented on the 7557 one, that's the security one. I was able to replicate that bug on an older version, on 1.13, so yeah, I can try and... which one was that? It should be 7557... yeah, the one at the bottom, yeah, yeah.
A: I don't think anyone is expecting that, like, by signing up you're committing to fix it. I think it's more just peer pressure, holding each other accountable, like, yeah, why don't you work on this one? And so we'll see, we'll see if we can collectively start to chip away at it. I think that would be awesome.