From YouTube: SIG Network Weekly Meeting 20200806
B: All right, fantastic. We've got next up: "AWS ELB forwards traffic to deleted pod," and somebody suggested... sorry, I don't.
A: Assign this one to me; this is an easy one to follow up on. Yeah, okay, I'll score those points.
E: Yeah, we are on it. You can assign this to me and I'll pass it off. This is Rob Scott.
B: Here we are: "error assigning spec podCIDR to nodes"... oh, behavior during an...
J: Why don't you assign it to me? We've dealt with something similar with GCP and preemptibles, and we can... I'll link them to the reference stuff. Cool. All right, thanks, Tim.
C: Yes, please assign to me or Rob. This is probably part of the whole related issue.
E: Oh, either way it'll get to the right person, sure. Yeah, the more the merrier.
B: Okay, let me grab the... I actually didn't open more issues than this, because I didn't realize we would go this far, so I'm just grabbing the URL again.
J: Yeah, it's starred in my mailbox. I just haven't gotten to it.
L: This was 28 days ago, so we are updating the RBAC. So it's an RBAC issue. Okay, yeah! So it's getting fixed.
B: All right, it looks like...
E: It looks like I am due to follow up with this one. Okay, I think...
E: Yeah, okay, I have followed up with this one, and yeah, it's in progress. It was an overaggressive timeout in the test, and I think I could increase the timeout even further. The reason this is failing is that garbage collection is taking too long. It's not even really an EndpointSlice thing. It's a... yeah, so I'll follow up again.
A: Thank you for walking us through that.
A: Cool. I see we are still getting items added to the agenda. Laura, you were next; you wanted to talk about an nftables-based proxy. Is that like a presentation, like slides, or do you just want to point folks...?
M: So, first of all, we would like to explain a little bit of the application stack that is used in this project. This is based on the motivation of consolidating nftables, not only for firewalling but also for load balancing, and of reducing the application stack needed for load balancing by using nftables. So, regarding the application stack that we are using right now...
M: We are based on the Netfilter infrastructure, both the kernel and the user-space infrastructure, and we just need nftables and the conntrack tools. With that, we could get rid of all the infrastructure needed for iptables, and all the derivatives of that, not only in the kernel but also in user space. So we are reducing considerably the infrastructure needed for load balancing, including IPVS. On the other hand, nftlb: this is a daemon, an event-driven daemon in user space, written in C.
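The consolidation M describes, one nftables ruleset doing the work of iptables chains plus IPVS, can be sketched as follows. This is an illustrative sketch, not kube-nftlb's actual output: the table and chain names and the backend-selection style are assumptions, though `numgen ... mod N` verdict maps are nftables' built-in way of spreading connections across backends.

```python
def render_nft_service(vip: str, port: int, backends: list[str]) -> str:
    """Render a minimal nftables ruleset that DNATs a service VIP to a
    set of backend pod IPs, picking a backend with an nft numgen map
    (the nftables replacement for chains of iptables statistic rules)."""
    vmap = ", ".join(f"{i} : {ip}" for i, ip in enumerate(backends))
    return "\n".join([
        "table ip lb {",
        "    chain prerouting {",
        "        type nat hook prerouting priority dstnat;",
        f"        ip daddr {vip} tcp dport {port} "
        f"dnat to numgen random mod {len(backends)} map {{ {vmap} }}",
        "    }",
        "}",
    ])

# One rule covers the whole service, instead of one iptables rule per backend:
print(render_nft_service("10.96.0.10", 80, ["10.244.1.5", "10.244.2.7"]))
```

A real deployment would load this with `nft -f`; the point of the sketch is that backend selection collapses into a single map expression.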
M: The next layer is kube-nftlb. This is a wrapper written in Go, and it is a work-in-progress project. What we have at the beginning is designed with just one pod and one container, but two processes: one client, which listens for the Kubernetes events, and a daemon, which is instantiated as one nftlb daemon; there is just one instance per node.
M: Of course, there is the messaging between the services; this is roughly the architecture.
M: We already support native dual stack, persistence per session by source IP with TTL support, NAT into the backends, and some annotations that we are extending, like some helpers in order to interact with some application-layer protocols, some advanced LB algorithms, and also traffic logging.
M: We are currently working on DSR. We have successfully tested it in Kubernetes infrastructure; we are just integrating it into kube-nftlb in order to automatically create the loopback interface for it. Also, we are expecting to have the egress hook in Netfilter, which will be required for interconnectivity for DSR inside one node. We're also working on supporting network policies. We expect that to be very easy, because we have something very similar that we call security policies, so it should be very easy. Also, external IPs.
M: Some next steps that we want to share with you are, for example, that we are open to some design discussion of this project, if you feel that this is a promising project. We are working as well on the option to support some advanced features that we could really notice the Kubernetes API is lacking: for example, some configurable session persistence, or some virtual-service connection limits to avoid, for example, the number...
M: Rule sets, and also some examples about the hooks and how we create virtual services, in terms of what kind of virtual-service configuration is really going to be needed: like this NAT example for port support, or external access from an external client to just the pods inside a Kubernetes node, or even from user space. We also have some cases for DSR, so we can show here the uses of the different hooks for different use cases, and just to show how, internally, this...
M: ...is working, or how we are able to do that. And that's all. If you want, we can explain something else more deeply; we are open to discussion of this design. That's all, if you have any questions.
L: I have a question: how did you allow network policy to integrate with this? Does it come with the newer policy implementation, or, basically, what's the basic handshake between network policy and the nftables kube-proxy?
M: Well, my idea is to use our security policies, which are mainly blacklists and whitelists that are able to be applied per service, a virtual service. So we can use the events of Kubernetes to create these objects, to generate the lists, and also to apply them to several virtual services.
M: Well, we haven't tested eBPF, but from some benchmarking that we have been seeing on several web pages, it doesn't seem to give much more performance than what we got with nftables, and...
M: nftables is right now a language with much more expressivity than iptables, so we can consolidate pretty well all the load-balancing requirements that we need so far, and even if it doesn't cover something, we are open to integrating it quite easily and quite rapidly. So...
M: Currently, we think nftables is enough, and we have been in the load-balancing market for a long time.
L: So I have another question: since this is implemented using nftables, I assume the container interface setup... like, there's not much more dependency than with the iptables-based kube-proxy?
M: Well, we are using the same API, but kube-nftlb is completely independent, so we could remove kube-proxy and then, using the same API, we can generate the whole connectivity required for load balancing.
L: So I have one more question, sorry. Yeah. Regarding the DSR: you showed an example from the user-space program. So does it have to come from the same node, or can it be, like, outside of the cluster?
M: This DSR we have tested very successfully, which covers the first cases: pod support.
L: I see. Then how did you carry that information? When you do the DNAT, where do you stick in the necessary information for the server side to respond directly to the client?
A: ...so that we get to the rest of the topics on the agenda. We're halfway through the meeting now, so I think let's cut off questions there and...
A: Okay, thank you for...
E: Thank you; that looks really cool, thanks for sharing. This is just a really quick presentation. I wanted to introduce Rick Chen, who has been interning with us this summer; he's been helping us evaluate algorithms for topology-aware routing.
N: As you could see, when a request is sent from a zone to a service, we want that request, and the response, to come back to the same zone where it came from. But meanwhile, we don't want to overload any endpoints with too much traffic, so we decided to create an automatic tool to evaluate our different algorithms and play with different data. A very short story before this: we had a Google spreadsheet to evaluate a single algorithm with very simple input, which we think is not enough at all.
N: So we created this automatic tool, which lets us play with more realistic data and with more algorithms. We can evaluate them in practice, compare them, and improve them. Basically, we abstract some key concepts in Kubernetes, including the endpoints, EndpointSlices, and zones, and use the algorithms to output the distribution of EndpointSlices; then we evaluate those algorithms with some metrics we think are important, which we'll go into later in the following slides. Here's the link; feel free to play with the tool. Next slide, please.
N
Before
going
to
the
like
the
metrics,
we
have
to
mention
two
assumptions
we
use
for
simulation
wisely.
Traffic
will
be
proportional
to
a
number
of
nodes
or
number
of
calls,
or
com,
computer
and
computational
resources
in
a
zone,
and
the
second
assumption
is
that
capacity
will
be
proportional
to
a
number
of
some
points
in
a
single
zone
and
feel
free
to
challenge
or
question
these
assumptions.
N: ...The in-zone traffic percentage says how much traffic stays in the same zone; we want to keep as much traffic as possible in the same zone. But meanwhile, we do not want to overload any endpoints with too much traffic; that is to say, a good algorithm should evenly distribute traffic across all endpoints. And last but not least, we all know that there's a price when we create an EndpointSlice, so we want our algorithms to reduce that overhead.
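Under the two stated assumptions (a zone's traffic share is proportional to its nodes, its capacity share to its endpoints), the in-zone traffic metric just described can be sketched like this. The function name and exact scoring formula are assumptions for illustration; the real scoring lives in the tool on GitHub.

```python
def in_zone_traffic_score(zones: dict) -> float:
    """zones maps zone name -> {"nodes": int, "endpoints": int}, where
    "endpoints" means endpoints reachable only from that zone.
    Traffic share ~ nodes (assumption 1); capacity share ~ endpoints
    (assumption 2). The traffic a zone keeps local is capped by the
    capacity its local endpoints provide."""
    total_nodes = sum(z["nodes"] for z in zones.values())
    total_eps = sum(z["endpoints"] for z in zones.values())
    kept = 0.0
    for z in zones.values():
        traffic = z["nodes"] / total_nodes      # share of all traffic
        capacity = z["endpoints"] / total_eps   # share servable locally
        kept += min(traffic, capacity)          # traffic that stays in-zone
    return kept

# A perfectly balanced layout keeps all traffic in its origin zone:
balanced = {"a": {"nodes": 2, "endpoints": 4},
            "b": {"nodes": 2, "endpoints": 4}}
print(in_zone_traffic_score(balanced))  # → 1.0
```

A zone with traffic but no local endpoints scores zero for its share, which matches the intuition that all of its requests must cross zones.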
N: The first algorithm we're working on is called overflow slices. That is to say, when a zone has more actual endpoints than expected, compared to the ratio of nodes the zone has, we assign those extra endpoints to a global slice, so that every zone can send traffic to that slice, and the other endpoints stay within the same zone in a local EndpointSlice; that is, they can only receive traffic from the local zone. And the second algorithm is...
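The overflow-slices idea just described (endpoints beyond a zone's node-ratio share move to a globally routable slice) can be sketched roughly as follows. The rounding choice and function shape are assumptions, not the simulator's exact logic.

```python
import math

def overflow_slices(zones: dict) -> tuple[dict, int]:
    """zones: zone name -> {"nodes": int, "endpoints": int}.
    Returns (local, overflow): endpoint counts kept in a zone-local
    slice (receiving traffic only from that zone) and the count moved
    to a global slice that every zone may send to."""
    total_nodes = sum(z["nodes"] for z in zones.values())
    total_eps = sum(z["endpoints"] for z in zones.values())
    local, overflow = {}, 0
    for name, z in zones.items():
        # a zone "deserves" endpoints in proportion to its node count
        expected = total_eps * z["nodes"] / total_nodes
        extra = max(0, z["endpoints"] - math.ceil(expected))
        local[name] = z["endpoints"] - extra
        overflow += extra
    return local, overflow

# Zone "a" has 5 of 6 endpoints but only half the nodes, so its
# surplus spills into the shared overflow slice:
local, overflow = overflow_slices({
    "a": {"nodes": 1, "endpoints": 5},
    "b": {"nodes": 1, "endpoints": 1},
})
print(local, overflow)  # → {'a': 3, 'b': 1} 2
```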
N: ...it's like we manually assign endpoints to make the distribution match the ratio of the number of nodes in every zone. These are very rough ideas right now, and we are trying to improve those algorithms with our tools, as well as find some new algorithms. Let's take a look at what our input and output look like. The left-hand side is the input file we use; it's a CSV file with different combinations of zones.
N
The
first
column
of
the
zone
is
the
number
of
nodes
and
seven
columns
on
the
number
of
endpoints
and
there.
This
is
the
input
file
we
use
for
created,
like
for
some
representative
data
set
and
the
right
hand
side
is
the
diagram
created
based
on
our
output,
says
with
bio
waiting
for
that
here
you
can
find
the
output
file
in
the
github
and
just
give
you
a
very
rough
idea.
How
that
looks
like.
Let's
take
an
example.
Look
at
the
only
one
endpoint
data
input
has
a
very
low
score.
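As described, each zone contributes a (nodes, endpoints) pair of columns to the input CSV. A tiny parser for that layout might look like this; the exact file layout is an assumption inferred from the description, so treat it as illustrative rather than as the simulator's real reader.

```python
import csv
import io

def parse_scenarios(text: str) -> list[list[tuple[int, int]]]:
    """Each CSV row is one test case: alternating node-count and
    endpoint-count columns, one (nodes, endpoints) pair per zone."""
    scenarios = []
    for row in csv.reader(io.StringIO(text)):
        vals = [int(v) for v in row if v.strip()]
        pairs = [(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
        scenarios.append(pairs)
    return scenarios

# The "somewhat balanced, huge" case mentioned later in the meeting:
print(parse_scenarios("300,654,300,704"))  # → [[(300, 654), (300, 704)]]
```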
N
That
is
quite
intuitive
because
they
only
score
one
point:
that
we've
got
three
zones,
so
there's
no
way
that
you
can
got
a
very
high
scoring
in
zone
traffic
score
or
like
since
it's
a
local
algorithm.
You
can
user,
get
very
good
task
distribution,
because
some
of
the
traffic
can
go
to
their
endpoint
sizes,
can't
go
to
that.
Endpoint,
slides
and
yeah.
That's
pretty
much
about
what
we're
doing
right
now.
N: Here's the GitHub link; feel free to play with this tool and give us feedback, suggestions, or some new ideas. We'd be happy to have you working with us. Yes, thanks.
E: Yeah, that was a great presentation. One thing I'll add is, yeah, like, we...
E: We would love additional ideas for algorithms. The algorithms we have right now were mostly to test the tool and ensure that, given an interface, we could score an algorithm appropriately. Obviously, both of the algorithms we're implementing right now have holes in them, but I know there are probably plenty of ideas out there for how we could approach this, and now we actually have a way to just plug that into an interface and score the result with a wide variety of inputs. So yeah, it's a really cool project. Yeah. So...
J: Okay, so I think this is awesome, and I especially love the idea that we can try out different algorithms. What is the plan, like, when do we turn something like this into a KEP? Because my sort of gut feeling is, more or less, we can't do worse than we're doing today.
E: Yeah, well, you would be surprised there; we have found ways to do worse than we are doing today. But yes, I agree with that concept. I would love to see this get into a KEP in time for the 1.20 release cycle; that has been the goal. I'd like to have a few different algorithms.
E: I think both of the algorithms we've implemented so far don't make sense on their own. I'd like to get an algorithm we're a little bit closer to happy with before we turn this into a KEP. It doesn't need to be a perfect algorithm, but right now we have an algorithm that works well for small sizes and an algorithm that works well at large scale, and not one that does particularly well for both.
E: Yeah, so this is not a perfect example at all, but the largest example we have is this "somewhat balanced, huge" one, and "huge" is maybe not the best term here, but this indicates 300 nodes in zone 1 with 654 endpoints, 300 nodes in zone 2 with 704, etc.
E: Yeah, so this is not actually interacting with the Kubernetes API; this is just simulating what the API resources would look like.
A: Yeah, that'd be great. Cool. So I think we should try to move somewhat quickly through these next topics. [Name unclear], you are next; can you do five minutes?
H
Yes,
so
we
have
been
discussing
this
issue
on
the
mailing
list
around
adding
some
new
capabilities
into
how
dns
is
managed
in
the
internal
network
within
the
key
test
primitives-
and
I
think
the
last
update
was
if
there
are
no
other
better
suggestions,
we
should
look
into
working
on
a
cab
to
make
a
formal
proposal
to
how
things
should
change.
I
just
wanted
to
bring
it
up
in
this
meeting
and
if
somebody
wants
to
help
out,
they
are
most
welcome.
J: I think the first one is a clear winner: having a default service backing a namespace. I think that has a much wider audience. It seems like there might be other implications that are not obvious, versus just adding a subdomain that can be routed to a service. That's my instinct.
H
I
would
if,
if,
if
you
feel
that
we
should
do
one
and
two
as
one
cap,
I'm
fine
with
that,
otherwise
we
can
split
that
up
and
see
they
both
are
very
interesting,
and
I
think
they
both
offer
very
different
set
of
features
as
people
map
tenancy
into
name
spaces.
The
ability
to
have
a
deployment
or
a
service
that
backs
a
namespace
dns
address
seems
like
a
powerful,
primitive
and
it
it
happens
to
solve
the
problem
we
are
facing.
H
I
don't
think
it
is
necessarily
it's
not
necessarily
targeted
to
solve
the
problem
that
we
have,
but
it's
an
interesting
idea,
the
other
one
where
a
service
can
host
other
sub
domains
in
the
dns
entry
com,
somewhat
analogous
to
what
external
names
does
is,
is
more
geared
to
just
solving
the
dns
problem.
So
that's
my
extensive
feel
for
this.
J: So do you think... sorry. There's the one option, which is to have a default name, a default IP, for the namespace name. The main reason I like it is that it maps pretty straightforwardly to the way DNS works, and you don't have to add another level to the DNS names, but it is sort of a more potentially impactful change.
J
The
other
answer
is
just
to
have
a
while
effectively
a
wild
card,
and
it
is,
it
is
also
pretty
straightforward
to
implement
and
it's
less
impactful
one
of
the
main
differences
between
the
two
is
in
the
default
for
the
namespace
model.
You
have
to
create
a
specific
external
name
record
for
every
bucket.
You
want
to
serve
and
in
the
wild
card
case
you
don't
does
that
matter.
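The trade-off J is describing (one explicit ExternalName-style record per name, versus a single wildcard matching any child) can be illustrated with a toy resolver. The record names and suffix-matching rule here are assumptions for illustration, not how CoreDNS actually implements either proposal.

```python
def resolve(explicit: dict, wildcards: dict, name: str):
    """explicit: exact DNS name -> target; wildcards: "*.suffix" -> target.
    Exact records win; otherwise the first matching wildcard answers."""
    if name in explicit:
        return explicit[name]
    for pattern, target in wildcards.items():
        if pattern.startswith("*.") and name.endswith(pattern[1:]):
            return target
    return None

explicit = {"bucket-a.store.ns.svc.cluster.local": "10.0.0.1"}
wildcards = {"*.store.ns.svc.cluster.local": "10.0.0.1"}

# Explicit model: every bucket needs its own record, so an
# unregistered child gets no answer.
print(resolve(explicit, {}, "bucket-b.store.ns.svc.cluster.local"))  # → None
# Wildcard model: one flag on the service covers all children.
print(resolve({}, wildcards, "bucket-b.store.ns.svc.cluster.local"))  # → 10.0.0.1
```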
H
The
if
the
external
name
itself
can
have
a
wildcard,
so
if
you're
defining
an
external
name,
you
can
give
service
name
that
and
then
route
that
to
the
stateful
set
as
an
example.
In
our
context,
I'm
sorry
for
the
background
noise,
so
that
might
also
be
a
feasible
idea
per
se.
J
I
don't
think
that
that
holds
up
as
well.
I
think
if
we
were
to
do
a
wild
card,
you'd
want
to
put
it
on
the
service.
J
So
on
your
on
your
main
service,
which
is
saying
I
want,
I
don't
want
the
normal
child
behavior.
I
want
this
wild
card,
child
behavior
yeah,
and
so
then
you
I
mean
the
good
news.
Is
you
only
have
one
resource
and
it
has
the
wild
card
flag
on
it?
Yeah.
I
don't
know
if
the
number
of
resources
is
actually
a
value
or
not
like
is
there
information
that
you'd
want
to
attach
to
each
individual
external
names.
H
So
the
only
thing
I
can
think
of
is
that
if
there
are
monitoring
of
management,
solutions
or
security
solutions
that
eventually
get
built
for
mapping
services
names
without
wildcards
to
metrics
or
policies,
then
the
having
the
wildcard
might
be
more
intrusive
because
it
kind
of
breaks
it
versus
just
having
explicit
records
being
or
resources
being
created,
not
necessarily
new
cluster
ips,
but
at
least
dns
records
being
created
for
every
new
entry.
H
That's
the
only
thing
I
can
think
of
from
a
pragmatic
standpoint,
both
work,
fine,
I
don't
have
a
strong
preference
one
way
or
the
other.
J
Okay,
let
me
I'll
send
one
more
quick
note
on
this
to
the
mailing
list,
just
so
that
we
capture
this
distinction,
which
I
guess
I
didn't
really
think
about
before,
and
then
we
can
proceed
to
either
a
single
cap
or
a
split
cap
where
we
make
a
decision
between
the
two
paths.
G
You
can
go
fast
because
I
know
there's
another
one
after
me:
there's
the
tcp
close
way.
That's
not
super
critical,
but
I
I
was
just
kind
of
asked
what
the
future
of
that
test
was
going
to
be,
whether
it's
going
to
be
improved
or
not,
and
then
the
other
one
was
that
I
think,
is
a
little
more
important
because
I
always
use
it
for
diagnosing
things
and
have
for,
like
you
know
forever
as
the
intra
pod.
If
we
could
do
like
pulling.
G
But
I
look
I
mean
I
look
back
at
the
way
the
e2e
framework
is
working
nowadays
and
it
looks
like
it
fails
automatically
and
if
we
could
breadth
first
poll,
then
we
could,
at
the
end
of
the
intra
pod
tests,
which
I
think
are
kind
of
the
number
one
diagnostic
for
knowing
if
networking
is
totally
broken.
G
So
if
those
are,
if
we
breadth
first
poll,
then
we
could
very
easily
know
when
somebody
submits
a
bug
whether
their
networking
is
just
totally
broken
or
whether
one
node
or
two
nodes
is
down
and
the
rest
of
their
cluster
is
up.
And
so
on
is
that
I
know
that's
terse.
I'm
trying
to
talk
quickly,
but
does
anybody
know
what
I'm
talking
about
there.
G
Okay,
okay,
you
know
what
I'll
just
type
it
into
zoom
and
then
we
can
go
to
the
next
next
person
because
I
know
there's
more
questions.
E
Yeah
this
is
this
is
really
quick.
There
there's
a
link
in
the
agenda
too
service
api's,
pre-alpha
review
for
anyone
who
likes
to
review
apis,
especially
large
apis.
We
are
looking
for
api
reviews
at
this
point.
E
This
is
a
full
google
doc
actually
of
the
api
and
go
types
along
with
caveats
at
the
top
that
this
is
still
in
progress
and
I've
linked
what
we're
currently
still
working
on,
but
I
think
it's
ready
enough
for
some
api
review
and
I
would
appreciate,
I
know,
there's
some
people
on
this
call
that
are
excellent
at
api
reviews.
E
We
are
hoping
to
get
an
initial
alpha
out
in
august
and,
as
you
know,
we're
in
august
already
so
we'll
see
where
that
goes,
but
we
would
love
any
feedback
at
this
point.
The
reason
it's
a
google
doc
is
this
is
all
on
github,
but
really
the
only
way
to
review
there
would
be
the
incremental
prs
we've
been
making.
E
J
Hey
all
so,
we
haven't
even
crossed
the
bridge
for
kubecon
virtual
eu,
but
I
got
an
email
from
nancy
at
cncf
asking
if
we
want
to
do
a
maintainer's
track
session
for
kubecon
n,
a
which
is
also
virtual,
so
specifically
like
a
sig
network,
intro
and
or
deep
dive
session.
Because
of
all
the
chaos
around
europe
and
the
everything
else
that
happened.
Bowie
and
I
went
ahead
and
recorded
that.
But
who
wants
to
do
the
one
in
north
america,
volunteers?
J
We
don't
have
to
do
one,
but
I
would
assume
we
want
to
so.
We
don't
have
to
answer
now.
But
if
you're
interested
in
helping
produce
the
material
I
mean
we
can
always
reuse
material
but
to
help
present
and
record
the
session
drop
me
a
note
on
slack
or
email
or
twitter
or
github
or
whatever,
and
tell
me
that
you're
interested
and
I
will
be
happy
to
help
loop.
You
in.
B
I
want
to
just
add
to
that
that
if
it
sounds
intimidating
keep
in
mind,
this
is
not
a
live
delivered
thing.
Therefore,
you
could
do
what
we
did
for
the
home
session
for
eu,
which
is
two
maintainers
you
know
recorded,
and
then
we
got
other
maintainers
to
watch
the
video
and
give
us
ideas
of
things
we
forgot
to
say,
and
I
added
them
in
as
pop-up
video,
so
you
can
totally.
J: Yeah, or you can do it the way Bowie and I did this one, which is: we used the CNCF's tool, and you get one shot at recording it; hope you don't flub. And it was fine, but it's not gonna be nearly as creative as what I'm hearing from other people.
J
So
anyway,
it's
really
not
that
daunting.
Super
friendly
audiences
are
sick
in
real
life.
Our
sig
updates
are
always
very
well
attended
and
they're
a
lot
of
fun.
So
if
you're
haven't
done
one
before
and
you
feel
like
you're
you're
comfortable
with
the
material
and
you
want
to
step
it
up
or
you
want
to
force
yourself
to
become
more
comfortable
with
the
material.
J
A
Great,
thank
you,
everybody
that
is
the
end
of
our
agenda
with
three
minutes
to
spare.
So
thanks
everybody
for
coming
and
we'll
see
you
all
in
two
weeks.