From YouTube: Package ThinkBIG: December 2020
A
Hello and welcome to the Package monthly Think BIG session, where we talk about ideas that aren't directly actionable on our roadmap but are worth discussing. We have a few items on the agenda today, and we're trying to lead more with questions. The last few of these have been more presentational, with Ian and me presenting, so today we just asked three questions; I thought that would be a good conversation starter.
A
So the first question is: how do we improve the overall quality of our product, and how do we measure it? Ian, you had the first bullet.
B
One of the things I think about when I think of a quality product is the onboarding experience: how easy is it to set up, how easy is it to connect to all of my projects, things like that? If we want to improve the quality, at least from an experience standpoint, that could be something we could investigate.
B
If there are no thoughts on that, the next point I have is that we could measure product quality by how many support calls or tickets end up landing on the support team — people struggling to get onboarded, having issues, things like that — and start to measure how much we're growing compared to how many support calls we get.
C
I wonder if there's a way to — because I see almost two different types of quality. There's "what is this package feature, can I play with it, use it and figure it out," and then there's "I'm an enterprise company and I have a completely different set of expectations and problems." They're not in the same realm, for the most part.
A
I think so, because we differentiate that in our usage funnel, the way we look at metrics. We see the acquisition phase as the former of what you mentioned, where someone's evaluating the product. Maybe it's a manager — we're meeting with a customer today where a manager had their team evaluate the product and is bringing feedback to us — compared to the customer we were talking about earlier, who's building tens of thousands of packages or images a day.
C
Well, and the example I thought of was support tickets. Some support tickets are just "oh, you need to configure this differently," versus "oh, this is a complicated problem that none of us understands yet." Maybe there's a different weight of impact for those different types of tickets or bugs that get opened — and how would those weigh into measuring quality?
C
I do like what you mentioned about the onboarding experience in general. For me as a developer, that usually sells a product and its reputation to me: was that first interaction amazing, or did I struggle through it?
B
First-time experience has a lot of weight in how people view a product, so I think looking at the onboarding is really important. But for Package it's uniquely important, because we should be an invisible feature set: if everything in the package registry is working like it's supposed to, nobody comes to visit the UI and nobody should think about it again. So our biggest opportunity, by a lot, is going to be the onboarding. If the onboarding is amazing as a focus, and then we're stable after that, we'll have a really good retention rate.
B
Actually, I had a really good one with Netgear — granted, most of that application is designed for onboarding — but it very naturally walked me through the process: this is how you install the modem, this is how you add the mesh network, this is how you set it all up. It was a soft entry, and I'll probably never use the app again unless there's a problem, but I'm still going to remember that the brand gave me that good experience.
D
I think here we really have two users, right?
D
If you think about self-managed, there are admins who set up the package registry and container registry, and then there are the people who actually consume the set-up product. So there's kind of a two-tier set of users.
C
And for me, with the companies I thought of when you mentioned that, any experience comes down to: can I go through the documentation or tutorials, just copy and paste, and have things work and generally do what I expect? So for packages: it says you have to connect to the registry — can I just copy and paste that command, maybe replacing only the password, and connect to the registry, and then "oh, now you can publish your package"?
A
That's a great point — it makes me wonder: do you think we should have more example projects? I know early on we had a Maven example, and we have an example for the generic package registry. Do you think we should start to link to example projects for each format in the documentation — "here's an example of a project that builds a Conan package" — or are the docs enough? Is that too much?
A
So more templates is a good idea too. I know Sarah and Nathan worked on that npm template that you could just populate as your GitLab YAML file, and it'll automatically build and publish your package. We could have that for other formats.
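A sketch of what such a template might look like for npm — the job name, stage, and Node image are assumptions, and "@scope" stands in for the project's group, while the registry URL and job-token lines follow the shape of GitLab's documented npm setup:

```yaml
# Hypothetical template sketch: build and publish an npm package to the
# project's own GitLab package registry using the CI job token.
publish:
  stage: deploy
  image: node:latest
  script:
    # Point npm at this project's package registry...
    - echo "@scope:registry=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/" > .npmrc
    # ...authenticate with the job token (strip the "https:" scheme)...
    - echo "${CI_API_V4_URL#http*:}/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
    # ...and publish.
    - npm publish
  rules:
    - if: $CI_COMMIT_TAG
```

An equivalent template for Maven, NuGet, and the rest would keep the same shape and swap in that format's publish commands.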
D
Thinking about docs, before we get too far from it: one of the problems I've noticed in the past with the container registry is that there are the Docker docs, the handbook docs, and then the docs in our GitLab repository for the container registry. Obviously we can't ask Docker to take down their docs, but maybe we should try to unify ours a little better, so they're in a central location — it might help the learning experience.
C
On that line, actually: one of the users who's been partaking in some of the discussions in the dependency proxy issue mentioned that when we run CI, if we use an image that's in the GitLab registry, it can automatically authenticate against it without us having to enter any credentials.
C
Why can't we just do that with the dependency proxy too? And my response was: that would be a much better experience, not having to deal with credentials.
C
Right — or against that Docker registry, to an extent. And I kind of feel like the same goes for packages, because we could identify them: we know all of the package URLs start with, you know, api/packages/npm or whatever. So if we see something like that, that's a GET request that looks familiar — can we just automatically allow it, without them having to do the npm config within the CI file?
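As a toy illustration of that idea — this is not GitLab's actual routing code; the pattern and function name are invented for the sketch — recognizing package-registry reads could look something like:

```python
import re

# Illustrative only: match GET requests aimed at known package-registry
# endpoints, so CI could hypothetically inject job-token auth for them
# instead of requiring manual npm/maven config in the CI file.
PACKAGE_API_PATTERN = re.compile(
    r"^/api/v4/(projects/\d+/)?packages/"
    r"(npm|maven|nuget|pypi|composer|conan|generic)/"
)

def looks_like_package_request(method: str, path: str) -> bool:
    """Return True for read requests that target a known package endpoint."""
    return method.upper() == "GET" and bool(PACKAGE_API_PATTERN.match(path))
```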
C
I don't know — I've almost never used our documentation from that perspective. I mean, I guess I kind of have, once: when I first played with a package manager that I didn't have any part in implementing.
C
I had to go through the docs and follow them, and I think, for the most part, the pieces I've worked through have worked. Trying NuGet, Composer and PyPI for the first time, I followed the docs and was able to publish a package; I don't recall any points of frustration.
A
Yeah, I agree. I was thinking that our docs do a really good job of that first use case you were talking about earlier — someone who's just testing it and wants to deploy a simple package — but they don't necessarily do a good job for the enterprise person who's going to have a much more rigorous use case. I don't think we really cover much of that in the docs, but I don't know if that would be going too far.
A
Maybe this new campaign we're running — Seeing Is Believing, where people in the community share examples — maybe we could use that to get some more relevant examples of projects and documentation from folks.
A
I think this is something that happened recently when — David, you started creating the package factory, and we started using not just hello-world packages, right? We're actually using packages that pull dependencies, that are larger in size, things like that too. Right?
F
Well, I created a small tool that will quickly build a dummy package and upload it, for all the package types we have. So we can put that in a shell loop and quickly populate a project with hundreds of packages. But for a more complex situation — where a package needs another one, or one from another project — you still need to do it manually.
G
I think yes, especially because the factories have some limitations that we can't really overcome. For example, when you start, you want to have pipelines, a commit URL, that kind of stuff — the factory doesn't really generate that data. So both in development and in test, we're better off with the real deal. I know that makes things much more —
F
Actually, now that you're talking about pipelines: we should duplicate all the scenarios and run all the same commands, but executed in a pipeline, by a CI job. I don't even know if we can do that in the QA tests, but it could be really nice coverage.
A
That's a good point about performance. Do we have any performance tests that we run regularly?
G
It depends on how deep you want to go into performance. If you want to load the full page, talk with the backend, and assert that the page always loads in less than two seconds or something like that — I think that would be valuable, but it's also very hard to do. I think we do have something, but I'm not sure how much of it is integrated into our pipelines.
A
Okay, so to sum up some things that came out of that: definitely making the onboarding experience easier; making some templates, and making sure the templates in our documentation work; maybe having templates like the generic registry one for the other formats; and maybe improved testing and performance testing. So the next question is: now that we're seeing increased adoption —
A
Yeah — and if there are any other concerns that come up. Because, for example, we saw that customer with 27,000 Composer packages the other day, and I think some customers have hundreds of thousands of Maven packages, for instance.
A
So I'm wondering: are there performance things we need to address, or usability things we should think about? It seems like our circle of concerns has grown as adoption has grown, so I'm just trying to see what your thoughts are on that.
A
Yeah, I agree with that. Are there any storage policies? Let's say you have 100,000 packages in your project, just using a round number — maybe not all of those are immediately relevant. Is it possible to have things in a kind of cold storage, where things that are, say, over a year old get archived?
C
I like that idea. I think we could potentially create an archive flag for a given package, and then update our searches to ignore archived packages, which would reduce the number of packages returned.
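A toy sketch of that flag — this is not GitLab's schema, just the shape of the idea: packages carry an `archived` flag, and the default search skips archived rows unless explicitly asked:

```python
from dataclasses import dataclass

# Illustrative model only, not GitLab's actual Package record.
@dataclass
class Package:
    name: str
    version: str
    archived: bool = False

def search(packages, query, include_archived=False):
    """Name search that ignores archived packages unless asked otherwise."""
    return [
        p for p in packages
        if query in p.name and (include_archived or not p.archived)
    ]
```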
C
Yeah, that's possible. I like the idea of having, in that sort of cleanup policy, the ability to say "delete these packages" or "archive these packages."
G
The only thing about archiving is that it doesn't save storage costs. Right now, for packages that are a few bytes, this doesn't matter much, but I think we have some package formats that can go up to gigabytes.
G
We learned that recently. So moving them to an archive will still use that amount of storage, and it could be counterproductive, because now the user can't search them or discover them anymore — unless the archive becomes something like a bin where, after 30 days, everything is flushed out. But I don't know.
A
Okay, it's an interesting idea, and we have some research planned. I'm talking to a couple of customers next week about the package cleanup policy, so this archive idea is something I could bring up during those conversations and see if people would be interested in it — although they may not share the concern about the performance of having, you know, hundreds of thousands of packages in their project. Okay — and Nico, you have the next point on this one.
G
Yeah, so I think that the more stuff we have, the more we need to enrich the UI to allow searching and sorting as fast as possible.
G
I don't know — fancy searching on the name, or sorting on all kinds of fields, or separate lists that say "these are the latest 10," "these are the 10 most used," the most downloaded, I don't know. We'll need to start working along these kinds of avenues, in my opinion.
A
One thing I've seen in our competition, in Artifactory and Sonatype, is that they basically support custom query languages in each of their products — almost like their own version of SQL. It's really ugly, and I always wondered why they do it. But I guess we do have this problem where there are many different fields someone may want to search on, and you want to give them that ability without having to support everything generally.
G
My idea is that we could use something similar to the issue and MR search, which is basically a search that you can build with pieces. So if you select type = npm, we could load all the values that apply to npm, so that we don't give the user something really dangerous like a raw SQL search, and we have a UI that's familiar.
B
Okay — I especially like that it's a really familiar pattern. One thing we could do: if we build the basics with the data we have now, we could test it with users to (a) check and make sure it works and (b) ask them what other data points would be relevant to their team. Then we can discover, from hearing from our users, whether we need to add the ability to search on custom properties, or anything else we should look at beyond what we're already getting.
A
So what would have to be true for us to support that? Do we need something more complicated? I know for the container registry we have that bulk delete API, where you can pass in different types of parameters. Do we need a similar API for the package registry, where you could pass in type — I guess you can already pass type — but other key parameters too, maybe certain metadata or something?
G
But instead of a bulk delete, isn't it better to go directly with cleanup policies? Actually, it sounds to me like a cleanup policy — a schedulable cleanup policy.
F
Yeah, but it's more of a side question: in order to better identify the packages that aren't used anymore, I'm wondering if we need to display the usage of a package.
F
On the metadata endpoint, one of the fields is the number of downloads of the package, and right now we're returning zero — which is okay, it doesn't matter for NuGet. But I'm wondering if we should start including some usage information in the UI, so that it's easier to identify packages that are too old or not used anymore.
B
If we were able to add that to the cleanup policies, users would get really excited, because instead of having to depend on the tag name or some other metadata, they could say "delete all packages that have been used fewer than five times, are more than three months old, and haven't been included in CI for this long," versus keeping packages that have been pulled more than a thousand times because they actually get used. Things like that would really change the game for users.
A
Okay — and nothing's blocking us from doing that for packages now, right? We'd basically just need to start capturing that data: build out the model to track the number of pulls and pushes for a given package version, and then have some way of displaying it and filtering on it.
F
Yeah, I guess to display it as a chart we'll need to save it with a timestamp, so we can aggregate the number of downloads over time. That would also support the cleanup-policy option of saying "delete everything that has not been used in the last month," things like that.
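A minimal sketch of the tracking model being described — all names here are illustrative, not an actual GitLab implementation: record each pull with a timestamp, aggregate per day for a chart, and expose the "unused since" check a cleanup policy could build on:

```python
from collections import defaultdict
from datetime import datetime, timedelta

class DownloadTracker:
    """Toy sketch: per-version pull events with timestamps."""

    def __init__(self):
        # (package, version) -> list of pull timestamps
        self.events = defaultdict(list)

    def record_pull(self, package, version, at):
        self.events[(package, version)].append(at)

    def daily_counts(self, package, version):
        """Aggregate pull counts per calendar day, for charting."""
        counts = defaultdict(int)
        for ts in self.events[(package, version)]:
            counts[ts.date()] += 1
        return dict(counts)

    def unused_since(self, package, version, cutoff):
        """True if the version has had no pulls since `cutoff` —
        the kind of predicate a cleanup policy could use."""
        return all(ts < cutoff for ts in self.events[(package, version)])
```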
A
Okay, that's cool. And then, Steve, you had the next comment on this.
C
I think it's more of what we've been talking about. I was noting that some of the problems we've run into when it comes to scale all revolve around the same root problem: traversing an entire tree of groups and subgroups, and all the projects within the subgroups, checking permissions on each project to see if you have access to its packages, and then returning the packages with pipeline data and other data.
C
That
is
an
expensive
operation.
And
we
do
that,
certainly
in
the
ui
and
then
for
a
variety
of
different
package
manager
commands.
We
do
that
and
I
think
I
think
david
was
chatting
with
me
about
one
such
command
this
morning,
and
I
don't
know
if
there's
a
a
way
like
this
is
probably
getting
a
little
more
on
the
technical
side,
but
like
finding
a
way
to
improve
that
operation.
So
we
don't
continue
to
run
into
it
as
we
expand
functionality,
because
it's
going
to
be
a
common
pattern
of.
A
How can we go through the list and find the most at-risk portions of the product — the most at-risk calls — and then address them one by one? Is there a metric we can use, or is it just knowing, "oh, I know this Maven command calls four different upload endpoints," or whatever? Is it just knowing what those are? Is there a way we can test that and then programmatically start to address it?
F
I opened an issue — or the team opened it, I don't recall — about npm commands and their latencies, or time to respond. You can build that out of Kibana dashboards, and it was a Kibana chart with all the different npm commands, or endpoints. I think it was the metadata endpoint that had not-so-great response times, so there's something going on there.
B
Heading into the new year, we're planning some new research, and I wanted to ask if any of you had any questions you want to ask users — anything at all. Some of the smaller things we can squeeze into research we're already doing, or it could spur some new research to be done.
G
If the research is done on the container registry side, it would be good to see whether they notice the performance difference, and whether they notice that we recently reversed the order of the list, so that now the newest container image is first on the list, not last as it was before.
G
It would help us know whether this kind of change is perceived, or whether it's transparent — like, if they need to search for an image, they just search for it, type the name, and move on.
B
Cool, yeah, we can definitely do research on both of those. One thing I do want to call out is the part about asking them whether they felt like it went faster.
B
We can gather data on that question after they go through a test, but it's really unreliable data: you're asking them to recall something they may not have done for a while. That could either dramatize their memory that it took forever — so no matter what, it'll feel faster — or the opposite: it could feel even longer because they're experiencing it just now. So I wouldn't trust that data as much.
B
Yeah — we can ask, "how do you perceive it after just going through this process? How do you currently perceive the speed of the experience?" Then we can judge that and use it as a baseline metric for whatever's next. That's a useful piece of data for sure — that was a good question.
C
Yeah, with regards to what we talked about earlier — finding the more advanced use cases to add to our documentation — maybe just asking whether users and customers would be able to share anything they have: parts of their workflow, how they're using it in a more advanced way beyond our docs. Even if it's just as simple as sharing their CI script, or part of their CI script. And then, over time —
A
I could do that now, because we're talking to people on issues all the time — I could just lead with that question: can you share your script? We could archive those somewhere and save them, as long as they're, you know, scrubbed of any personal information. That would be great; I love that idea. But maybe some separate research too, to talk to people — we know that there are n number of namespaces on .com that have more than a thousand packages.
A
We are at time — thank you, everyone; this was really great. Ian, would you say we should try to make these meetings more about asking questions? I know we'll have to present things sometimes, but I really like this format; I think it's much better for conversation.