Description
Milestone planning for Kubernetes Cluster API Provider AWS on 06/08/2020 at 1800 UTC
A
Hi everyone, this is the Cluster API Provider AWS milestone planning meeting of August the 6th. Please be advised we're running under the CNCF code of conduct, so be cool to everyone. Use the raise-hand feature if you'd like to speak, and make sure you put your name down in the meeting notes if you want to; I've dropped the link in chat, and I'll do so again for anyone who's joined in the last minute or two. Okay.
A
So I think the first thing we want to discuss is things related to EKS. Support for EKS, I believe, is one of the reasons this call has been convened. So does anyone want to fill everyone in on where we are with that and what things are outstanding?
B
Thanks, yeah. So there are basically two major components to having fully functioning EKS support. The first is the control plane: there's a control plane provider that is primarily being worked on by the folks at Weaveworks, and Richard is here. Then the other component is the EKS bootstrap provider.
B
The New Relic folks have a PR open for that, and we've done one big cycle of review; that's still open. On the control plane side, a couple of interesting things have come up, because this is the second control plane, so things are starting to look a little bit interface-y.
B
As this gets implemented, there have been a couple of refactors to some of the scope-shaped interfaces in CAPA, and those refactors continue. There's also been some reordering of things, for reconciling security groups, for example, because in EKS land you get a new security group with your control plane. So reconciling security groups before reconciling the control plane can't be done completely; you have to do it again afterwards in order to get access to that group.
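A minimal sketch of the two-pass ordering described above. The names are hypothetical stand-ins; CAPA's real scope and service interfaces differ:

```go
package controllers

import "context"

// SecurityGroupService and ControlPlaneService are hypothetical stand-ins
// for the provider's cloud services; the real interfaces differ.
type SecurityGroupService interface {
	Reconcile(ctx context.Context) error
}

type ControlPlaneService interface {
	Reconcile(ctx context.Context) error
}

// reconcileNormal shows the ordering constraint: security groups are
// reconciled once before the control plane exists, and again afterwards,
// because EKS attaches a new security group to the control plane that
// cannot be discovered on the first pass.
func reconcileNormal(ctx context.Context, sg SecurityGroupService, cp ControlPlaneService) error {
	// First pass: the groups the provider itself owns (nodes, load balancers).
	if err := sg.Reconcile(ctx); err != nil {
		return err
	}
	// Creating the EKS control plane produces its own cluster security group.
	if err := cp.Reconcile(ctx); err != nil {
		return err
	}
	// Second pass: look up the EKS-created group and record it in status.
	return sg.Reconcile(ctx)
}
```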
B
So
that's
just
kind
of
an
example
of
some
of
the
reorganization
that
has
come
out
of
this
work.
The
other
major
thing
worth
highlighting
is
the
fact
that
cappy
proper
needs
to
establish
a
client
to
the
target
cluster
and
the
way
that
that
works
in
cube
adm
is,
you
know:
cubitium
generates
a
fully
valid
q
config
using
certs.
All
the
way
through
eks
doesn't
want
you
to
do
that
at
all.
They
want
you
to
use
either
the
aws
cli
or
the
aws
im
authenticator
to
establish
a
client
to
the
cluster.
B
So the only kubeconfig EKS gives you won't work in the CAPI container without, one, giving it AWS credentials, and two, installing one of those two binaries. So there was a little bit of discussion about moving that to a service account token model in the short term, and question mark, question mark, question mark for the long term.
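Under the service account token model mentioned above, the controller would only need the API server endpoint, the cluster CA, and a token, with no AWS CLI or aws-iam-authenticator binary in the image. A minimal client-go sketch, with illustrative function and parameter names:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newTokenClient builds a client for the workload cluster from a bearer
// token (for example, a service account token minted on the target
// cluster) and the cluster's CA bundle, avoiding any dependency on the
// aws CLI or aws-iam-authenticator inside the controller container.
func newTokenClient(apiServerURL string, caPEM []byte, token string) (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host:        apiServerURL,
		BearerToken: token,
		TLSClientConfig: rest.TLSClientConfig{
			CAData: caPEM,
		},
	}
	return kubernetes.NewForConfig(cfg)
}
```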
B
The other thing that came up, kind of related to this with the bootstrapping: on the New Relic side we were approaching AWS machine pools from an EKS context and not from a kubeadm context, the relevant difference being that the refresh logic is not necessarily a thing that works all the way through in the kubeadm provider. So just to call out that AWS machine pools, which we also have a PR open for, will work with the EKS bootstrap provider pretty well.
A
Yeah, okay, that sounds interesting, particularly the security group one. If you want me to take a look and see if there's anything we can reverse engineer, I don't mind doing that as well. Richard, is there anything you'd want to add on this?
C
It would help if I unmuted myself, yeah. Andrew pretty much covered it all. I guess there's one other thing: there's a slight refactor around labels, sorry, the tags, and the tags package. That's gone through a couple of iterations to make it more useful and less EC2-specific, so that we can have essentially something like a tags builder with an EKS variant that does the EKS-specific tagging. Yeah, and the security groups.
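A sketch of what a less EC2-specific tags package might look like: a generic builder plus an EKS-specific variant layered on top. The function names and tag keys here are illustrative, not the actual CAPA API:

```go
package tags

// BuildParams is a hypothetical, service-agnostic description of the
// tags to apply; the real CAPA tags package differs in detail.
type BuildParams struct {
	ClusterName string
	Role        string
	Additional  map[string]string
}

// Build renders generic Cluster API ownership tags that any AWS
// service (not just EC2) can consume.
func Build(p BuildParams) map[string]string {
	tags := map[string]string{
		"sigs.k8s.io/cluster-api-provider-aws/cluster/" + p.ClusterName: "owned",
		"sigs.k8s.io/cluster-api-provider-aws/role":                     p.Role,
	}
	for k, v := range p.Additional {
		tags[k] = v
	}
	return tags
}

// BuildWithEKS layers EKS-specific tagging on top of the generic set,
// instead of baking EC2 assumptions into the core builder.
func BuildWithEKS(p BuildParams) map[string]string {
	tags := Build(p)
	tags["kubernetes.io/cluster/"+p.ClusterName] = "owned"
	return tags
}
```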
C
So we currently look up the automatically created security group and attach it to the status of the control plane. But yeah, that's not enough, and I realized I hadn't actually updated the scope to return the security groups anyway. So that's an area of ongoing investigation.
C
I guess the other thing that would impact this as well is the clusterawsadm CLI and the required permissions. We've got a good idea of the additional IAM permissions required, and they're attached to the ticket, but to make this usable we obviously need to get that change done as well.
C
I think that's pretty much it. I'll put the parent tracking issue in the chat window as well; that links off to all the other sub-issues about what we're implementing, and I think we need to add a couple more to that, which Andrew raised earlier in the week.
A
Thanks. Okay, it sounds like we've got a good idea of what needs to be done and we just need to plug away. So I think for the last bit we'll think about version numbers and what things are going into each release, but before we do that: I dropped this spreadsheet link. We've had a roadmap document in the repo; I'll share my screen.
A
So we have a doc that more or less emerged out of discussions at KubeCon San Diego, I think. It was a pretty informal discussion and we've probably not really looked at it in a long time, so I just want to make sure that we go through this, that we sort of agree with it, and maybe sort out priorities for these things.
A
One of the things I thought about doing is to consider these in terms of how desirable they are, their feasibility, and their visibility, and maybe get a score out of that. So I think we just take them one at a time and go through them. The first one is dual-stack IPv4/IPv6 support; I'm not sure who's asking for this.
A
I haven't seen many requests for this in the wild. One of the reasons to do it, however, is that we don't have good testing for IPv6 in the Kubernetes project overall, and if we make Cluster API one of the canonical testing ways for Kubernetes overall, and CAPA supports IPv6, we will get much better testing of IPv6 in Kubernetes in general.
A
So in terms of visibility to users it might be zero, in that it's not really visible, but it might be considered desirable from a Kubernetes perspective. I don't know how important this is to anyone else. Is this something anyone else is seeing that they need?
D
The biggest request that I've seen for this has come from the release and testing team; that's the most work I've seen towards it. But I agree with you, I don't necessarily see a lot of demand or questions about when we're going to support it.
A
Up next is multi-tenancy. We already have a PR open for this, from New Relic I think; no, it's from Capital One, sorry. Feasibility is fairly high, and it's reasonably visible. I mean, this is a no-brainer; we should just go ahead and get this to completion.
A
Next is network topology. This is one that I've opened up, and it's really about the fact that we get a lot of requests to make changes to networking and, in my opinion, I'm really scared of it, actually. We have a lot of conditionality in the controller now. We've had discussions with AWS as well, and Jay popped up on the call about possibly using the AWS Controllers for Kubernetes. What we would do in this is break up the AWSCluster object and allow you to construct the various components, and that can then account for all the weird and wonderful varieties of networking in an AWS context, the infinite permutations thereof.
A
So
the
question
here
around
when
that
other
project
comes
online,
whether
it
supports
the
multi-tenancy
that
we
do
or
do
we
go
ahead
and
do
it
anyway.
How
much
do
we
implement
sort
of
the
ad
hoc
requests
that
come
in?
B
Andrew here. Yeah, so I think this is desirable. I am scared, as you mentioned, of our implementation, of doing this in the controller and things like that. So I do think that the feasibility of this would change depending on whether or not we use the ACK controllers.
B
I wonder if we should define how far into machine pools we go, because with EKS support there are a couple of ways to approach a machine pool. There's the unmanaged, ASG-based side, and there's also the managed side, so there might be a couple of machine pool providers in CAPA, or maybe not in the CAPA code base, but it's worth calling out that there is a managed node group idea that may or may not be in scope for the milestone.
A
And
by
managed,
do
you
mean
in
the
service
of
eks.
A
Okay,
so
I've
just
added
managed
machine
pool,
so
eks
managed
machine
pool
for
eks,
but
unmanaged
machine
pool,
cube
adm
and
machine
profile
gate.
So
it's
the
one
that
in
progress
right
now,
this
sort
of
eks
eks
but
unmanaged
as
in
it's
an
asg
that
works
with
eks.
It's.
A
Okay,
and
how
desirable
do
you
think,
is
the
managed
eks
richard.
C
I guess from people that use eksctl to provision EKS, the managed nodes are quite popular at the moment with our customers; you know, they don't want to look after the actual nodes. So it would be quite desirable to quite a few of our customers.
C
I don't think it would be wildly different, to be honest, because there are still instances; they're just managed, with a few other bits around them and different API calls.
A
Cool, thanks. And Andrew, how important is Fargate?
B
A
Yeah, all right, and I'm giving it a low feasibility, in the sense that I don't know how Fargate would map onto a machine, to be honest; no idea what it would look like. Cool, the next one is using ACK if it arrives. Unless anyone objects, I think it's very desirable and probably incredibly feasible as well; well, migration aside.
A
This
is
one
I
put
down
this
performance
improvements
and
rate
limiting.
I
mean
new
relics
very
keen
on
this.
You
were
hitting
into
your
cloud
trail
limits.
We've
definitely
seen
it
in
our
in-scale
testing.
A
Unless
everyone's
got
objections,
I
think
we
we
need
to
just
go
ahead
and
do
this
I
guess,
might
be
the
maybe
the
desirability
goes
down
if
you're,
using
eks
and
everything's
managed
by
you
guess,
but
it's
like
if
you're
not
it's
pretty
important
and
I'll
be
starting
work
on
that
soon
anyway,
andrew.
B
Where it gets a little bit more spicy is if you use the VPC CNI, which is a thing that just makes a bunch of calls, and CAPA is also making a bunch of calls. You can get into a scenario where CAPA's volume of AWS calls in a self-managed cluster will actually slow down your pods' ability to get IPs, and that's true EKS or not.
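One way to keep a controller's call volume from starving other API consumers is a client-side token bucket in front of the AWS clients. A sketch using golang.org/x/time/rate; the wrapped interface is illustrative, not CAPA's actual client type:

```go
package awsclient

import (
	"context"

	"golang.org/x/time/rate"
)

// DescribeClient is an illustrative stand-in for an AWS API client.
type DescribeClient interface {
	DescribeInstances(ctx context.Context) error
}

// RateLimitedClient wraps a client with a token bucket so the
// controller's call volume cannot exhaust the account's API quota
// (for example, starving the VPC CNI of calls it needs to assign
// pod IPs).
type RateLimitedClient struct {
	inner   DescribeClient
	limiter *rate.Limiter
}

func NewRateLimitedClient(inner DescribeClient, callsPerSecond float64, burst int) *RateLimitedClient {
	return &RateLimitedClient{
		inner:   inner,
		limiter: rate.NewLimiter(rate.Limit(callsPerSecond), burst),
	}
}

func (c *RateLimitedClient) DescribeInstances(ctx context.Context) error {
	// Block until the bucket has a token, respecting ctx cancellation.
	if err := c.limiter.Wait(ctx); err != nil {
		return err
	}
	return c.inner.DescribeInstances(ctx)
}
```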
A
I'm
we'll
just,
I
think
that
it's
pretty
high
priority
then
spot
instant
support.
Do
you
know
how
much
demand
there
is
on
this
people
who
are
working
with
customers?
B
Hi, me again. So there is a degree of this that comes with the unmanaged node group support. The struct that we've created does have, I forget the name of the fields, the instance distribution fields that allow you to say "I would like to use a spot pool for some portion of my ASG", so that is in there. What is not in there is saying "I would like an individual machine object to be provisioned via a spot pool", or something like that.
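A sketch of what such instance distribution fields can look like on an unmanaged (ASG-backed) machine pool spec, mirroring the ASG mixed-instances policy; the field names below are illustrative and may not match the struct that actually landed:

```go
package v1alpha3

// InstancesDistribution mirrors the idea of an ASG mixed-instances
// policy; field names are illustrative, not necessarily the CAPA API.
type InstancesDistribution struct {
	// OnDemandBaseCapacity is the minimum number of instances that
	// must be on-demand before any spot capacity is used.
	OnDemandBaseCapacity int64 `json:"onDemandBaseCapacity,omitempty"`

	// OnDemandPercentageAboveBaseCapacity controls the on-demand/spot
	// split for capacity beyond the base, e.g. 50 means half spot.
	OnDemandPercentageAboveBaseCapacity int64 `json:"onDemandPercentageAboveBaseCapacity,omitempty"`

	// SpotAllocationStrategy selects how spot pools are chosen,
	// e.g. "lowest-price" or "capacity-optimized".
	SpotAllocationStrategy string `json:"spotAllocationStrategy,omitempty"`
}
```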
A
Fair enough, yeah. I think you're more likely to do it for the machine pool, personally.
A
Don't
know
how
desirable
it
is.
It
looks
like
it's.
It's
happening,
gpu
support,
elastic
gpu
again,
I
have
no
idea
like
do
people
want
this.
It
seems
it's
either.
There's
two
things
we
need
to
consider
here.
A
The
support
in
image
builder,
particularly
with
nvidia
drivers
and
I'm
the
licensing
thing
stuff,
will
be
interesting
around
that
anyway,
and
what
whether
we
support
an
api.
I
think
we
get
gpu
support
automatically
through
the
instance
type,
but
we
don't
support
the
elastic
gpu
settings
right
now.
D
And-
and
that's
gonna
probably
tie
into
core
cluster
api
so
that
we
can
bubble
things
up
correctly.
C
Yeah, it was requested of us for eksctl to support enabling GPUs, as in doing those install steps. We have a profile where you can enable it as well, and that came from direct customer requests, but there aren't that many of them.
A
Yeah, so I see a lot of demand for getting just an image that actually works. For anyone who's looked at the NVIDIA stuff, the documentation is pretty geared towards Docker, not containerd, and it's pretty horrible.
D
I haven't dug deep enough to really know, just because there are hints of GPU stuff, but I didn't dig in deep enough to see how it ties in, so I think a one is about right.
A
Yeah
yeah
and
I've
no
idea
about
the
image.
I
think
it
might
be
pretty
horrible
for
container
d,
so
also
this
might
be
difficult.
Yeah
window
support,
that's
another
one.
So
I
think
this
is
mostly
an
image
builder
question,
because
I
think
you
can
pretty
much
one
cloud
in
it
at
the
moment
and
then
anything
else
I
know
azure
is
going
to
be
working
on
a
bootstrap
provider
that
does
the
things
through.
I
know
whatever
it
is:
powershell
design
state
configuration
system,
the
unattended
stuff,
I
guess,
is
what
they're
doing.
D
On
the
plus
side,
eks
support
gives
us
an
easier
road
to
support
this
through
eks
machine
pools.
If
we
go
to
manage
machine
pool
route.
A
One
less
than
once
eks,
I
think
we
just
had
on
the
roadmap.
I
mean
we're
doing
it
so
fun,
bootstrap
failure
detection.
So
I
did
start
a
dock
in
this
a
couple
of
months
ago.
A
We're
trying
to
figure
out
whether
or
not
there
was
some
generic
cluster
api
and
why
one
wide
way
to
do
this,
I
think
the
conclusion
was
not
not
really
given
that
we've
done
some
hacks.
I
guess
in
cluster
api,
mainly
around
retrying,
cube
adm,
the
in
need,
for
necessity,
to
have
to
do
this.
It
should
have
gone
down
but
like
how
I
think,
important
for
the
new
relic
people.
If
you,
because
you've
been
bringing
up
clusters
a
lot,
do
you
still
see
the
need
to
do?
A
We
need
to
build
something
around
aws
session
manager,
in
particular
andrew.
B
So
there
are
kind
of
a
couple,
different,
interesting
things
here.
So
on
the
machine
deployment
side,
there
is
a
like
machine.
Health
checks
actually
get
rid
of
most
of
our
problems
in
that,
like
oh
there's,
no
node
like
let's
just
keep
retrying
and
then,
like
you,
know
I'll,
just
notify
myself
in
some
way.
B
If
I
need
to
go
start
trying
to
shell
onto
a
host
and
figure
out,
what's
going
on
with
machine
pools,
it's
a
little
bit
more
interesting
because
there
is
no
machine
object
and
because
the
number
of
replicas
is
optional
on
a
machine
pool
object
like
it's
really
hard
to
sometimes
detect
that
you're,
not
in
the
state
that
you
think
you're
in
so
there's
like
it
gets
a
lot
more
complicated
in
the
machine
pool
case
I
think,
but
it
it
would
be
nice
on
the
machine
deployment
side
to
know,
like
you
know,
hey
max
retries
three.
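A rough sketch of that bounded-retry idea for machine deployments: keep recreating a machine whose node never registers, and surface a failure after a maximum number of attempts. Everything here is hypothetical scaffolding, not Cluster API code:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// nodeRegistered would, in a real controller, check whether the machine's
// node has joined the workload cluster; stubbed here.
func nodeRegistered(ctx context.Context, machineName string) bool { return false }

// recreateMachine stands in for deleting and re-provisioning a machine.
func recreateMachine(ctx context.Context, machineName string) error { return nil }

// waitForBootstrap implements "max retries: three": if a node never
// appears within the startup timeout, recreate the machine, and give up
// (surfacing a failure) after a bounded number of attempts.
func waitForBootstrap(ctx context.Context, machineName string, maxRetries int, startupTimeout time.Duration) error {
	for attempt := 1; attempt <= maxRetries; attempt++ {
		deadline := time.Now().Add(startupTimeout)
		for time.Now().Before(deadline) {
			if nodeRegistered(ctx, machineName) {
				return nil
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Printf("bootstrap attempt %d/%d timed out for %s, recreating\n", attempt, maxRetries, machineName)
		if err := recreateMachine(ctx, machineName); err != nil {
			return err
		}
	}
	return errors.New("bootstrap failed after max retries; manual investigation needed")
}
```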
A
Okay, maybe a two. I've no idea around feasibility; I'd need to look at it again.
A
All
right
thanks
for
that,
I
I
haven't
even
considered
machine
pools
at
all,
so
that
would
be
even
completely
different
once
again,
so
that's
good
feedback
I'll
start
having
a
look
again.
What
to
do?
If
there's
anything
we
can
do
next.
One
is
machine
load
balancers,
so
I'm
not
sure
why
this
is
an
aws
roadmap
per
se.
A
This
came
out
of
stuff.
That
was
happening
particularly
with
the
on-premise
environments,
where
there
isn't
necessarily
a
public
cloud
load
balancer
construct,
and
you
need
one
for
your
api
server,
regardless
of
whatever
ingress
or
service
type
load.
Balancer
you
actually
have
in
the
cluster.
A
Is
there
a
reason
not
to
use
elbs
and
have
a
machine
based
api
server
load
balancer
in
the
aws
world?
I'm
not
sure.
D
I
think
we
could
leverage
kind
of
you
know
this
concept,
and
it
would
also
give
us
the
ability
to
support
more
than
just
elb
classic,
so
it
would
give
us
a
way
to
potentially
swap
in
an
nlb
or
if
folks,
wanted
to
spin
up
kind
of
a
separate
load
balancer
for
ingress
services.
There
would
be
a
facility
to
do
that
as
well,
using
the
same
type
of
tooling,
that
would
manage
kind
of
a
control,
plane,
load,
balancer.
A
Does anyone object to those numbers? You should be able to edit this, by the way; it's shared with the cluster-lifecycle mailing list, I think. And finally, from those roadmap items, there's using EventBridge for instance notifications: things like when a machine turns off, CAPA can get notified automatically through some mixture of SNS topics and SQS queues.
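A sketch of the consumption side of that idea: an EventBridge rule matching EC2 instance state-change events publishes to an SQS queue (possibly via SNS), and the controller long-polls the queue instead of polling the EC2 API. This uses aws-sdk-go v1; the queue URL and the handling are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sqs.New(sess)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/capa-events" // hypothetical

	for {
		// Long-poll the queue that the EventBridge rule publishes to.
		out, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(queueURL),
			MaxNumberOfMessages: aws.Int64(10),
			WaitTimeSeconds:     aws.Int64(20),
		})
		if err != nil {
			fmt.Println("receive error:", err)
			continue
		}
		for _, m := range out.Messages {
			// The body is the EventBridge event JSON; a real controller
			// would decode it and requeue the owning machine object.
			fmt.Println("event:", aws.StringValue(m.Body))
			svc.DeleteMessage(&sqs.DeleteMessageInput{
				QueueUrl:      aws.String(queueURL),
				ReceiptHandle: m.ReceiptHandle,
			})
		}
	}
}
```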
A
We've still got quite a lot of items, but I think our big-ticket items are there: multi-tenancy (I'll delete that one, because we broke it up into the different pieces); the managed machine pools, or the varieties thereof, though not so much the Fargate version; getting those performance improvements and rate limiting done; EKS in general, just getting the support for that; and the CloudWatch EventBridge notifications.
A
Those
are
our
highest
priority
items
then
I
think
for
our
next
release
and
then
slightly
less
than
that.
Aws
controls
combinations
a
bit
wise
network
topology.
If
dependent
on
that
ack-
and
I
guess,
if
there's
time
we
can
do
gpu
support
and
draw
stat
and
bootstrap
failure
detection.
A
So
that
makes
sense
to
everyone.
Is
there
anything?
That's
the
major
top
things
items
that
people
think
are
missing
or
they
would
like
to
see.
A
Yeah,
so
what
I've
done
is
this
project's
night?
So
it's
on
the
kubernetes
six,
but
we
can
create
projects
within
the
sixth
organization
and
we
can't
necessarily
create
a
project
on
the
project
itself.
So
what
I've
just
been
doing
is
bucketing
the
all
of
the
open
issues
which
I
think
there
was
about
94
into
various
categories,
and
then
we
go
go
through.
A
Maybe
not
sure
now
is
best
time
to
go
through
all
of
these
or
if
maybe
we
should
just
sort
of
figure
out
our
timeline
space,
which,
if
we've
got
opinions
in
which
order
we
should
we
sort
out
the
timelines
first,
the
milestones.
D
A
Cool. So if we look back at this, let's see if we can...
A
Oops. We could go six months... not six months, three months from September, so, well, December, I guess, for 0.6.0. Is that optimistic, or are there not enough things in there and we could release sooner? How do people feel about that? Andrew?
B
Yeah, so from the New Relic perspective, it would be optimal to be able to deploy the EKS objects via published YAML, whether it's an alpha release or whatever, at the end of August, for example. As far as getting the machine pool portion, that's totally later; I certainly wouldn't see that work going as far as December. In our case we'd probably like to have them in the mid-October time frame, maybe even early October, and I'm guessing based on calendars I saw a while ago. But yeah, from our perspective, we're basically committed to making EKS fully functioning in its basic form by the end of this month, and to having machine pools functioning for EKS by the end of the next month.
A
Okay,
maybe
on
that
basis
there
we
maybe
we
make
we
zero
six,
zero,
all
singing
dancing,
initial
eks
support
and
then
machine
pools
in
zero
six
x.
Well,
zero.
You
know
whatever
that
is
zero,
six
one,
probably
for
end
of
september,
so
end
of
end
of
august,
zero,
six,
zero,
eks
initial
machine
pool
zero
six
one
end
of
september
september.
Does
that
sound
about
right.
A
Yeah
and
personally,
I
will
be
doing
performance
improvements,
stuff
things
like
that
for
probably
the
zero
six
zero
timeline
as
well,
probably
follow
up
with
the
event
bridge
well
before
you
follow
up
with
the
multi-tenancy
stuff,
so
yeah
I'll,
put
event
bridge
and
performance
improvements,
zero,
six,
zero
and
then
multi-tenancy
for
zero
six
one
and
with
the
machine
pools
as
well.
Does
that
sound
about
right.
A
I'm
okay
with
not
immediately
picking
times
with
15
minutes
for
the
rest
of
these,
and
we
can.
Maybe
people
can
give
a
thought
about
these.
We
can
come
back
to
these
in
the
next
meeting.
I
think
we've
got
a
clear.
We've
got
a
very
clear
idea
what
we
want
to
get
done
in
the
next
two
months
and
we
can
take
the
next
meeting
when
there's
more
people
around.
Why
didn't
you?
Opinions
go
through
some
of
these
other
items
that
we
know
less
around
the
desirability
thereof:
around
gpus,
bootstrap
value,
detection,
spottings,
etc.
D
August 31st and October 1st, then, and there's not really too much of a target for v1alpha4 yet, but we're looking at maybe Q4 or Q1, calendar-year-wise. I think that'll probably become a little bit clearer as we get further along with the CAPI planning.
A
All
right,
then,
I
think
maybe
on
that
basis
we
I'm
happy
to
call
it
for
this
current
bit
and
then
we
can
sort
of
give
some
thought
to
it.
A
line
see
what
happens
with
cappy.
A
Cool, all right then. With that, thanks everyone for your time; see you in the next regular meeting. I guess I will get this recording saved and sort out with Jason how to get it somewhere.
D
Yeah
I'll
be
happy
to
help
you
out
with
it
thanks
everybody
thanks
ninja.