From YouTube: Kubernetes SIG Network 2019-02-21
B: On the data plane we aren't programming rules for ICMP packet translations, so in essence, for an end user, this is visible as any existing tools that rely on ICMP, like ping or traceroute, not working for outbound communication paths. I wanted to hear more about what mechanisms in the Kubernetes data plane rely on ICMP messaging that we could be breaking, whether there are any tests associated with that, and just generally what our requirements are there.
D: So from the thread, the one that we know for sure is that we use ICMP reject for services that have no endpoints, which is pretty important for pod-to-pod traffic: you need to get a reject instead of a time-out. I have a PR that I'm cooking to add the same capability for external load balancers, because apparently we just missed that case or didn't finish it.
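The user-visible difference being described here can be demonstrated with a plain TCP client, independent of Kubernetes: a reject surfaces immediately as a connection-refused error, while a silent drop leaves the client hanging until its own timeout expires. A minimal sketch:

```python
import socket
import time

def classify_connect(host: str, port: int, timeout: float = 2.0) -> str:
    """Try a TCP connect and report whether it was refused or timed out."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        # A reject (TCP RST / ICMP unreachable) surfaces immediately.
        return f"refused after {time.monotonic() - start:.2f}s"
    except (socket.timeout, TimeoutError):
        # A silent drop leaves the client waiting for the full timeout.
        return f"timed out after {time.monotonic() - start:.2f}s"

# Grab a port that nothing is listening on, then connect to it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

print(classify_connect("127.0.0.1", free_port))  # refused almost instantly
```

Connecting to a closed local port gets an immediate RST, which is the "reject" experience; a firewall DROP rule would instead produce the slow "timed out" branch.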
D: I mean, it seems unfortunate, but since we didn't have it before, it's hard for me to say that it's critical. I will be adding an e2e test to cover it, so I'll have to make sure that the e2e test only runs on platforms that support it, but other than that it doesn't seem like an egregious thing to me. I wanted to see if other people have opinions.
F: I feel like anything you can't restrict with network policy, we don't actually guarantee. Like, in theory, if we say you're supposed to support ICMP, that means pods can flood each other with pings even when they're not supposed to be able to talk, and stuff like that. Oh, actually...
G: But I guess my point is that I was not expecting protocol restrictions here. Technically, ICMP is a peer of IPv4 in the actual protocol stack, but logically it's joined at the hip, so along those lines of thought, it seems odd to restrict ICMP.
D: That's a good question. I mean, this is sort of new territory in the Windows world, right? So is there an established expectation about whether one of these contained processes should be able to ping out? An argument we made on the other Windows stuff was: well, Windows users don't expect that to work, so it's not a regression for them. Is that the case here?
B: So I would say that we try to document wherever possible, like in how-to guides, that this is something that wouldn't work. So from that perspective, from the beginning we're setting the expectation that it shouldn't work. But in practice I still see quite a few users who have been trying it.
D: You know, because load balancers or node ports are where you're responding to an external IP address, across your pod network versus the external networks, and that seems like a relatively minor thing. And you're right, Mike: it's about user experience and not correctness. It'll still time out; it just won't time out fast.
D: Mike, you've cut to the core of it, whether you recognize that or not. It's important to understand a little bit of the history of the architecture that exists today. When we were first building this, and we had support for just one cloud provider, we said: how do we get load balancers into services? The only way we could do it was to send traffic to some number of nodes in the cluster and then use iptables or userspace on those nodes to bounce to the pod endpoints.
D: I don't want to send to all the nodes in this list; I want to apply an arbitrary filter here. And the way we went about that was adding a label to the node. So we say: okay, if this node is labeled with this magical word, then we will not add it to this list. And so the question here is: do we move that forward from alpha, or do we find a better answer? Andrew, did I capture the question right? Yep.
K: That's good. So yeah, I'm happy to kind of punt this into 1.15 if we want to noodle on it for a bit longer, or, if we can agree on at least leaving the current functionality as is, renaming the label to something that indicates it will be stable, at least for v1 services, and then start talking about longer-term solutions later.
K: So the biggest thing is that, in addition to the node exclusion logic, we also don't add masters to load balancers, so there's no way for users to indicate that they want their masters in their load balancers. The ideal end goal is that we add all nodes, including masters, and then we rely on that exclude function. But with the exclude function being alpha, it would be a bit odd for us to add masters back into load balancers and then force users to enable this alpha feature as well.
D: There's also the question of, so, looking at that filter pass that we do in the cloud controller (I looked at this very recently): we don't take nodes that are labeled with the master role; we don't take nodes that are marked unschedulable, which for most masters is also true; and we don't take nodes that are labeled with this label, if the alpha gate is enabled. So those users who want to load-balance through their masters will also have to mark their masters schedulable, or we'll have to change that logic.
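The filter pass just described can be sketched as a simple predicate. This is an illustrative reconstruction, not the actual cloud-controller code, and the label keys are stand-ins for the real ones:

```python
# Illustrative sketch of the node filter pass described above.
# Label keys here are placeholders, not necessarily the exact upstream names.
ROLE_MASTER_LABEL = "node-role.kubernetes.io/master"  # assumed key
EXCLUDE_LB_LABEL = "alpha.service-controller.kubernetes.io/exclude-balancer"  # assumed key

def eligible_for_load_balancer(node: dict, exclude_gate_enabled: bool) -> bool:
    """Return True if the node should receive load-balancer traffic."""
    labels = node.get("labels", {})
    if ROLE_MASTER_LABEL in labels:       # skip nodes labeled as masters
        return False
    if node.get("unschedulable", False):  # skip cordoned/unschedulable nodes
        return False
    if exclude_gate_enabled and EXCLUDE_LB_LABEL in labels:
        return False                      # alpha exclusion label
    return True

nodes = [
    {"name": "master-0", "labels": {ROLE_MASTER_LABEL: ""}, "unschedulable": True},
    {"name": "worker-0", "labels": {}},
    {"name": "worker-1", "labels": {EXCLUDE_LB_LABEL: ""}},
]
lb_nodes = [n["name"] for n in nodes
            if eligible_for_load_balancer(n, exclude_gate_enabled=True)]
print(lb_nodes)  # ['worker-0']
```

Note how `worker-1` is only excluded while the alpha gate is enabled, which is exactly the point being argued: with the gate off, the label is silently ignored.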
D: So I know this because I was looking at whether we... what I don't want, rather, is to filter unschedulable nodes out. I actually would rather change the way health checks work to indicate that an unschedulable node is not ready, as opposed to not being part of the list. The reason for that is to do cleaner connection draining. So I took a look at this code; in fact, I have a PR right now, which is what's informing my concern about this feature.
D: Would we choose a generic annotation? Having a kubernetes.io prefix on the annotation sort of implies that everybody supports it, and if we had a per-cloud-provider prefix there, then it becomes a cloud-provider-specific solution, which I actually prefer. But I don't know who is actually using this today; the fact that it's alpha, I guess, means there are probably relatively few people using it. So if we're going to change it, now is the time to change it. If we make this go beta, then we're just stuck with it.
D: Yes, that's one of the options: just sort of document it and say this label exists, some providers will respect it and some will not. And that sort of sucks from an end-user point of view, right? You don't really know what you're getting. Which is why I would lean towards, I would rather, end-of-life this label and push it back on the cloud provider: say, if you want to support this, then you have to add a cloud-provider-specific label and define what that means. It sort of means that when you move between cloud providers you'd have to change the way you do your logic, but at the infrastructure level you sort of... well, you have some program that's saying which nodes you want to filter in and which nodes you don't, and it's kind of dependent on the provider itself.
D: Anybody who's running a cluster where the master is part of the cluster, right? We have a general problem, I think, with masters, in that we have conflated the idea of a control-plane node with a node. The fact that masters register as a node in some clusters and not in other clusters is confusing to people, and again, if I had my time machine working, I would change that.
G: Yeah, I find that odd, actually. You know, before I got involved with Kubernetes, everywhere else I went a node meant a machine, regardless of whether its role was control or worker. I find it totally odd that here we use "node" in some sense in contrast to "control", only for workers. It makes sense to me to have flexibility. We've got self-hosting, right? We've really got, in some sense, three levels of stuff, right?
G: You know, since we do so much self-hosting, right: you want part of the control plane to be hosted by worker nodes; you'd also like to be able to have it on distinguished control nodes, having reserved some nodes for running the self-hosted control plane and some for pure workload, right?
G: So, to back up a step: don't we already have distinct labels for distinct roles?
D: We do not have a single label that indicates what a role is. We have a... or maybe it's implemented as a label, like kubernetes.io/role=master; I forget exactly the details of that. But yes, we have a semi-formal "you're a master or you are not a master". Okay.
G
Yeah,
that's
why
I
object
to
right,
I
thought,
though,
that
we
were
actually
doing
it
with
distinct
labels
for
each
role,
so
that
makes
it
orthogonal.
We
could
make
an
orthogonal
I
mean
we
could
use
it.
Orthogonal
ii-if,
it's
syntactically,
it's
already
orthogonal.
If
we've
got
a
distinct
label
for
each
role,
I
would.
G: The thing to say is: there's a concept of roles, in the sense that there are distinct roles that you can combine as you want. There's a role of hosting workloads, there's a role of hosting control, there's a role of not hosting at all, just being control, right? And each of those roles can independently apply or not apply to a given node.
E: We're gonna change the behavior of this. Like, if we pull out the check for unschedulable and the check for master, then all of a sudden traffic starts going to nodes that it didn't go to before. So I would think that during the upgrade you'd probably need to relabel the nodes. Is that something the...
K: So yeah, the concern about DDoSing the master I think is a legitimate reason for us to do this in a future release. But going to what Mike said, my thought process for that was: add a filter-nodes method that preserves the exact same logic for nodes today, but then we would completely get rid of the service node-exclusion logic and rely on the existing providers to follow that same logic. That at least lets new providers not be stuck.
D
It's
almost
like
you
read
my
github
I,
actually
called
the
method,
filter
node
the
question
I
have
that
I
don't
know.
The
answer
to,
though,
is
for
cloud
providers
that
are
not
in
tree.
This
represents
a
behavioral
change
of
the
main
controllers
and
I.
Don't
know
how
to
do
outreach
to
the
set
of
people
who
are
doing
their
own
out
of
tree
top
runners.
I
guess
do
is
one
of
those
so
like
what
would
happen
if
I
silently
started
sending
load,
balance
or
traffic
through
your
master
names
I.
Think.
G
Maybe
we
thought
we
agreed,
we
didn't
so
what
I
was
suggesting
was
that
we
take
the
cloud
provider
interface
and
add
a
new
method.
That
is
not
a
pre
pass,
but
is
an
alternative
to
the
method.
That's
currently
being
vocht
with
the
small
list.
We
had
an
alternative
method
that
gets
being
invoked
with
the
bigger
list
and
I
guess
we'd.
Actually,
technically
we
do
this
with
a
distinct
golang
interface.
D: Right, they would see "doesn't implement CloudProvider anymore, because it doesn't have the filterNodes method". Oh, look at the release notes. Oh right, I have to implement filterNodes. Here's the boilerplate filterNodes from the old implementation; I can just cut and paste that, or I can actually think. Right.
D
Right
but
we,
but
we
need
to
change
the
signature
or
something
in
a
way
that
would
cause
a
compiler
breakage,
exactly
change
the
name
or
the
signature
or
both
okay.
So
with
the
PR
that
I
started
on
does
his
has
a
non
exported
method.
But
if
we
made
that
an
exported
method
of
the
interface,
then
it
would
break,
and
that
would
force
everybody
to
pay
attention
that
might
not
be
too
bad.
So
we
could
do
that.
D: I don't know who's doing what with them right now. I know we've talked about whether it would be possible within Google. I think eBay does their own thing with it, and they've deployed their own cloud provider; they're light-years ahead of everybody else in terms of adopting crazy out-of-tree stuff. So, you know, I don't know that it matters to them.
D: We can do it. I mean, any change that changes the signature of that interface will cause the same net result. So we can look at options for that, whether that's creating an EnsureLoadBalancer2 like Mike suggested, or adding a filterNodes that we call from the core service-controller loop, or doing something else. We can argue about that; I think that's the less consequential decision. I think the real decision is what to do with the annotation.
K: Okay, so I will kind of go into this assuming that most likely we'll have to do this in 1.15, but I'll still continue on it and see: if there are volunteers to pick this up and we can get this done in 1.14, great, but I'm going to assume, going forward, that it's gonna be a 1.15 thing. Okay.
H: Yeah, this was kind of some leftover stuff that I've been talking about with Carol at KubeCon. He had brought up some cases where, if you're using, say, a DPDK app inside a pod, what happens is that there is actually a socket that is created and then has to get mounted into the pod, and the app inside the pod uses that to do all of its networking stuff. But the problem there is that, well, to be clear, this is not a multi-interface pod.
H
It's
not
anything
like
that.
It's
just
a
single
pod
with
what
would
be
a
single
IP
address
and
all
that,
but
because
the
networking
is
actually
provided
by
that
socket.
Essentially,
that
is
mounted
into
the
pod
that
mount
happens
at
a
different
time
than
the
sandbox
setup
for
networking.
As
far
as
I
understood
it,
and
so
it's
kind
of
a
question
around
well
and
also
the
mount
stuff
in
Dockers
per
container
not
force
in
box.
H
So
it
was
kind
of
a
question
of
there
seemed
to
be
no
good
way
to
handle
that
case,
and
it
seemed
like.
Maybe
we
would
need
something
that
would
eventually
be
a
more
consistent
or
combined
a
way
of
setting
up
pod
resources
that
was
not
just
Network
on
the
one
side
and
everything
else
on
the
other
side,
so
I
kind
of
wanted
to
throw
that
out
there
and
maybe
just
get
people
thinking
about
it.
A
little
bit
I
should
probably
send
a
mail
with
a
little
bit
more
detail
to
sig
network,
but
you
know
Tim.
H: I mean, the issue is that, well, again, two issues. Even if there was a CNI driver that did this stuff, it's not able to do mounting, so you'd have to have external coordination between that CNI driver and something else that would actually handle the mounting. And that could be, say, a device plugin, because it can do some of those things, or at least update those, I think.
G: Yeah, good. Just background: I'm not real familiar with the DPDK case, and if the answer is too big, feel free to defer it, but I'm a little bit confused, because normally in Linux, when you launch a process, it doesn't launch with any FDs open except for stdin, stdout...
H
Stood
error
right,
so
what
happens?
Is
the
application
has
to
have
a
path
to
that
socket?
So
there
is
some
configuration
set
up,
but
you
could
map
that
path
from
the
host
from
some
specific
directory
in
which
it's
created
dynamically
by
something
to
a
known
path
inside
the
pod.
So
the
app
running
inside
the
pod
would
always
see,
like
you
know,
var,
run
network
dot,
sock
or
something
like
that.
E: I just wanted to say, I guess, that I have a lot of questions about how that works. But it seems to me that if you're opening the socket and you're talking through something on that host, you're on a completely different network; you probably have a completely different IP on that network. It seems actually completely orthogonal to the pod networking aspect. It's just, you know, you do a volume mount, and it's just a special thing because you have to know a...
H: You know, there are bridges involved, of course, but the packets would simply go out over the regular network, in the same way that if you had a bridge and all the pods connected to the bridge with veths, those packets would then go out over the same network between nodes. Actually, it would probably be about the same thing for the DPDK case.
D: It requires looser application-level support. So in a cluster which has, say, kube-dns running, and who knows whatever random stuff from the Kubernetes GitHub, things and apps, or user applications like Cassandra or nginx: those things are not DPDK-enabled. Those things run on a pod network, a plane of existence in which they can reach each other. The DPDK stuff would have to plug into that same plane. You see?
D: We don't know that we can actually mandate that right now. Yeah, but somewhere along the way you have to say: this application needs DPDK; somebody please make the DPDK stuff available to this pod. That could be an annotation, or a label, or something that an admission controller uses to mutate the pod on the way through, like Istio does, right? You take the pod and add all the volume mounts that you want to bring in. I don't know how you do the host handshake; I don't know how you do, like...
D
The
name
so
we'd
have
to
think
about
that,
and
then
we'd
have
to
think
about
the
life
cycle
stuff
with
mounts
and
networking.
If
it's
not
done
it
in
a
mission,
controller
and
you'd
have
to
think
about
how
it
intersects,
with
networking
and
I,
know
that
if
Edie
were
here,
he
would
be
screaming
NSM
from
the
top
of
his
lungs
right,
because
this
sort
of
feels
like
exactly
what
NSN
was
about
is
a
way
for
pods
to
say:
I
want
access
to
this
network
service
that
committees,
that
cell
might
not
know
yeah.
H
I
mean
that
is
true,
but
that's
kind
of
like
all
the
stuff
that
you
just
talked
about
like
how
do
we
actually
plumb
or
how
do
we
actually
allocate
the
resources
on
the
host?
The
second
part
of
it,
which
I'm
trying
to
talk
a
little
bit
more
about,
or
at
least
seed
in
people's
minds,
is
once
all
that
stuff
is
done,
which
may
well
be
done
by
NSM,
or
something
like
that
in
the
future.
How
do
we
actually
deal
with
the
pod
side
of
the
network
and
plumbing
that
little
bit
into
the
pod
I
mean.
D
I,
imagine
that
something
like
NSM
would
be
able
to
receive
that
socket
over
a
socket
and
write
it
to
file,
and
then
you
don't
need
to
mount
it
in
and
it
you
know,
there's
there's
a
way
to
do.
Fd
passing
and
that's
what
and
Sam
is
trying
to
capture.
So
you
don't
need
to
literally
do
it
as
a
volume.
You
can
do
it
as
a
descriptor.
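The descriptor-passing alternative mentioned here is the standard SCM_RIGHTS mechanism over a Unix-domain socket. A minimal sketch of handing an open file descriptor from one end to the other (Python 3.9+ for `socket.send_fds`; a socketpair and a pipe stand in for the two processes and the network resource):

```python
import os
import socket

# A pipe acts as the "network" resource whose descriptor we hand over.
read_fd, write_fd = os.pipe()

# A socketpair stands in for the Unix-domain socket between two processes.
left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Sender: pass the pipe's read end over the socket (SCM_RIGHTS underneath).
socket.send_fds(left, [b"here-is-your-fd"], [read_fd])

# Receiver: pick up the descriptor; it refers to the same open pipe.
msg, fds, flags, addr = socket.recv_fds(right, 1024, maxfds=1)
received_fd = fds[0]

os.write(write_fd, b"hello over a passed descriptor")
os.close(write_fd)
data = os.read(received_fd, 1024)
print(msg, data)
```

The kernel duplicates the descriptor into the receiver, so nothing needs to exist on the filesystem at all, which is the point being made: no volume mount, just a descriptor.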
D: There are a lot of open issues that appear to be networking-related. Every now and then I try to triage them, but I never make it more than two or three pages into the triage before I run out of steam. I'm proposing that maybe we take the last twenty or thirty minutes of the next however many meetings it takes to open up that bug list and just chew through as many as we can, either assigning them to people, or closing them as duplicates, or whatever we're gonna do with them.
F: We should make sure there are IPVS people in the meeting. I don't actually know what that is, but I don't see any names I recognize right now as being IPVS people. But I know there are always IPVS bugs being reported. Yep, we can try to get them. And I ignore them, because I have no idea about IPVS. Well...
D: I have no idea whether they're duplicates, or whether they're actually bugs, or whether they're "I don't know how to use it" sort of help reports; there's a mix of everything in there, and I'm sure I'm not even finding all the right words. Like, how many thousands of open issues do we have against k/k right now? Okay, so my guess is that probably 10%, or maybe 15%, of those are networking-related.
D
So
I
would
guess
that
we
are
we've
caught
about.
Half
of
them
will
be
my
guess,
so
I'm
willing
to
put
the
work
in
to
try
to
go,
find
all
those
bugs
if
we
as
a
group
can
commit
to
actually
chipping
away
at
this
triage
problem.
We
don't
need
to
like
spend
six
hours
doing
it.
We
just
need
to
do
twenty
to
thirty
minutes
at
a
time
over
and
over
again
and
we'll
get
there.
D: Yeah, but if anybody's got time and wants to play with various other tools: I know there's a bunch of GitHub apps that require you to sign into them to do extra bug exploration or whatever, or if somebody wants to spend some time tinkering with the GitHub API to try to pull all those issues down, that's basically what I was doing. Awesome. Okay, cool. Well, drop me a note in a couple of days' time and let me know what you've got. Okay, all right, we're right on time, just...
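For anyone poking at the GitHub API for this kind of triage, the search endpoint can pull the open, label-filtered issues. A small sketch that just builds the query URL (the `sig/network` label name and repo are assumptions; adjust to whatever the triage actually targets):

```python
from urllib.parse import urlencode

def issue_search_url(repo: str, label: str, state: str = "open", page: int = 1) -> str:
    """Build a GitHub search-API URL for issues carrying a given label."""
    query = f"repo:{repo} is:issue is:{state} label:{label}"
    params = urlencode({"q": query, "per_page": 100, "page": page})
    return f"https://api.github.com/search/issues?{params}"

url = issue_search_url("kubernetes/kubernetes", "sig/network")
print(url)
```

Fetching that URL (with an auth token, to avoid the low anonymous rate limit) returns JSON with a `total_count` and an `items` list, which is enough to walk the backlog page by page.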