Description
Greg Wilkins and Chris Walker present:
Jakarta Servlet defines a server-side API for handling HTTP requests and responses. Get to know Eclipse Jetty as a compatible implementation of Jakarta Servlet 5.0. Eclipse Jetty is newly certified, and that certification brings exciting implications.
Chris Walker will explain what Jetty is, why the project made the change to the Jakarta namespace, and what the certification process involved.
Greg Wilkins will spend the rest of the time discussing his latest blog post on Servlet 5.0, which is part of Jakarta EE 9. You can read the post at the link below.
https://webtide.com/less-is-more-servlet-api/
A: We have joining us today Chris Walker and Greg Wilkins, to discuss Eclipse Jetty for Servlet. Both Chris and Greg played large roles in recently getting Jetty certified as a compatible implementation of Jakarta Servlet. Chris is a committer on Jetty, and Greg is the Eclipse Jetty project lead. If you have any questions for either Chris or Greg as we move through today's presentation, feel free to ask them in the chat or using the Ask Question tab. Without any further delay, Chris and Greg, over to you.
B: All right, I'll start. My name is Chris Walker; I'm one of the committers on the project team here with Jetty. Jetty has a long and storied history, if you're unfamiliar with it: it's been around since 1994, and we've always been at the forefront, trying to push technology.
B: One thing that we pride ourselves on here at Jetty is being, kind of, by developers for developers. So when Java EE became Jakarta, we were really keen to put our efforts into that, and right now we have three development branches that reflect that push. We have our main Jetty 9.4 branch, which is what most of our users are currently using, which still runs on Java 8 and isn't necessarily Jakarta focused, and then we have Jetty 10 and Jetty 11.
B: So that's where we currently stand. And we've actually seen that a lot of our users, who I would have thought would be a little bit more reticent to make the jump to Jakarta EE 9 because it is a lot of change, have actually skipped 10 and gone right to 11, really trying to embrace the technology and the opportunities that EE 9 has afforded us, along with Servlet 5. So that was really only a brief introduction from me.
C: Okay, so the presentation I'm going to give today is basically a little bit of a meta presentation of what we did to get to Servlet 5.0, which is a core component of EE 9. I'm going to start off with the big change that we had to make to get from Servlet 4 to Servlet 5, which was the Jakarta namespace change. You know, no one really knows why we had to do it — well, maybe somebody knows, but they haven't told me. It was a necessary evil, and ultimately, at this point in time, why we had to change the namespace doesn't really matter. It is what it is. You know, I thought it could have been a bit of a naming disaster in the making, but the glass-half-full way of looking at it is that it's an opportunity to do things with the API that we haven't been able to do for the last decades.
C: So the big difference between Servlet 4.0 and Servlet 5.0 is only the name. There were no other feature changes, no API changes — even things that are really necessary, like changes in the cookie standard. The decision was made to keep 5.0 completely in lockstep with 4.0 in everything but the name, and that's basically to allow automated porting of applications from 4.0 to 5.0.
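Because the release changed only package names, that automated porting can be sketched as a plain textual rewrite of source files. This is a minimal illustration, not any project's actual migration tooling, and the `port` helper is invented for the example:

```java
// Minimal sketch of the javax -> jakarta source rewrite.
// Since Servlet 5.0 changed only the package names, a plain textual
// substitution is enough for most application source code.
public class NamespacePorter {
    // Hypothetical helper: rewrite one unit of Java source text.
    static String port(String source) {
        return source.replace("javax.servlet", "jakarta.servlet");
    }

    public static void main(String[] args) {
        String before = "import javax.servlet.http.HttpServlet;";
        System.out.println(port(before)); // import jakarta.servlet.http.HttpServlet;
    }
}
```

Real migration tools apply the same idea at the bytecode or build level, but the principle — a mechanical rename with no behavioral change — is what made lockstep 4.0/5.0 releases possible.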
C: You know, the thing is, you're asking your users: I want you to change your application, but I'm giving you no extra features. And I think that was why there was a bit of fear that maybe, you know, where we led, the users wouldn't follow. But there were important changes in the step from 4.0 to 5.0, and they were mostly procedural.
C: They were in the way the specification was produced and the processes we put in place — who was doing the changes — and they actually are really important and set us up for the future. They give us the opportunity, when a future specification like 5.1 or 6.0 comes out, to be unencumbered by a lot of the procedural problems of the past. And what I'm going to advocate here is that maybe we unencumber ourselves from some of the API cruft of the past as well, and start looking at what we can reduce in the servlet spec to give ourselves a better future. Okay. So the big change, in coming to Eclipse for Jakarta, as well as the name, is that we changed the process: it's now the Eclipse Foundation Specification Process, and I'm not going to go into very much detail of the process.
C: And, you know, if you're a member organization of Eclipse, you can nominate someone to be on the spec, but you can also participate fully as an individual. This is exactly the same way individuals have participated in the open source software projects at Eclipse for decades: they can also become part of the specification projects at Eclipse, with all the full rights and abilities of any other committer on those projects.
C: This is actually a significant step away from the past processes, which were open — anyone could kind of participate — but they were very corporate based, and quite often the agenda of the specifications was driven by, you know, what targets we had to hit for the next JavaOne conference. There'd be internal processes, and things dumped on the specification committees: here, ratify this, you've got two weeks. That just doesn't happen anymore.
C: So it's quite a different process, and I think the specifications going forward are going to be better for it. Oh yeah — it's done by ballot, but there's not a strict majority.
C: Each project can set its own way of voting. But the key thing for me about a ballot is this: I've been involved with the servlet expert group doing the specifications for the last, you know, 15 years or so, but it was only once I went to Eclipse that I ever got a vote. Previously, because we didn't pay large sums of money to the large corporates running the specification process,
C: we never actually got to have a vote. So this is actually the very first servlet spec where I've ever actually put my hand up and said: yep, I think this should be released. Okay. And if we look at who's on the Servlet API project — who's on the panel is really important.
C: It's a very developer-focused servlet expert group and project. We've got Stuart Douglas, who's worked on Undertow; Mark Thomas, very key in Tomcat; myself, very keen on Jetty; and Arjan Tijms is in there,
C: who has previously worked on GlassFish and is now working on Piranha, a newer implementation of the servlet spec. And then there are others involved, but I'm just highlighting these four because we're all key developers on our servlet implementations. This is different from previously, where people who haven't had such direct involvement with actually implementing the specification have had key roles in setting the standards. And I think this group is working very well together; the atmosphere in this,
C: the servlet specification group, is much more collegiate than it has been in the past. Okay, we haven't done any significant major changes to the API, but we are discussing problems already, and I think we're making good progress. So as a team I think we're working very well, and hopefully, when we do start making some big changes, that atmosphere, that experience, and that collective shared background of dealing with the problems of the existing APIs will stand us in good stead for making changes in the future.
C: So, I started Jetty in 1995, before there were such things as servlets — servlets came along a couple of years later. So I've probably been working on servlets for, you know, 20-plus years, but it was only in December last year that I could actually say that my implementation was a compliant implementation of the servlet specification.
C: And we start seeing — this is not just Jetty's experience — here's a tweet from David Blevins of TomEE: you know, ten years in the making, he got certification for the project that he's worked so hard on. And here's the corresponding tweet for Jetty. I should have called out that it was 20 years in the making before we got certification. And, you know, obviously lack of certification hasn't done us any harm over the years; we're a successful project.
C: Many people use it, big and small. But I actually think it's a two-way street for projects getting certified: the more projects that get certified, the stronger the standard becomes, because people know that all the different implementations out there have passed the TCK. They're going to be more portable, more compatible; there's going to be less ambiguity between the implementations. And a stronger specification, resulting from more implementations being compliant, makes more implementations want to be compliant.
C: So I think it's a really important step toward making sure that EE and servlets are relevant in the future: reducing the barriers to, you know, being compliant.
C: So we can actually, you know, use this specification. Okay, so Jetty 11 with Servlet 5.0 — what have we got here? When we went to Servlet 5.0, we did a parallel release, with Jetty 10, which was Servlet 4.0, and Jetty 11, which is Servlet 5.0, and the actual code in those two Jetty releases is basically identical except for the names.
C: It does have a few features over Servlet 3.1, and so it's really interesting to see that we released 10 and 11 at the same time — Jetty 10 with new features and Jetty 11 with the name change — and in the three, I think four, months from when we released to when we grabbed some stats, you know, December to April, we got 85,000 downloads of Jetty 11 with Servlet 5.0. I think that's about 20,000 downloads a month, which is 10 percent of our monthly downloads. So that's quite good interest in a newly named release right off the get-go. Quite often a big point release will take a lot of time to get some momentum, even without significant changes, because people are very reluctant to go to a .0, let alone a .0.0, and put it into production.
B: To Jetty 11, too — I know we ran stats on GitHub and saw hundreds and hundreds of projects immediately upgrading, which is something I think really surprised us, because, you know, a lot of the time open source, while it moves fast, is, like Greg said, a bit reticent at times to pull a .0.0 release into their project. So we are seeing an uptick in community efforts as well to integrate that into their projects.
C: And I guess that was also a bit of a surprise for us as well, because the GitHub tooling is constantly evolving, and as an open source project we struggle to have visibility of who's actually using us. We know our big users — you know, the headline deployments of Jetty — and we have commercial relationships with a number of the next tier down, the smaller and mid-size users of Jetty.
C: But then there are literally thousands — tens of thousands — of projects out there using Jetty that we've never had any contact with and don't know anything about. And the tooling in GitHub, which has been set up to pass down security alerts and things like that, and to pass down new releases, resulted in what Chris said — though the GitHub tooling is not quite working well enough to give us numbers.
C: So we don't know the exact numbers, but, you know, thousands of projects, very soon after we went to Jetty 11, did the update and dealt with the namespace in a very short time, and seemingly had no problems with the renaming that needed to be done. And so the stats which I do have — again, just looking at the download rates — show that Jetty 9 is still our mainstay release: 88 percent of the monthly downloads at the moment are still Jetty 9. That's people hitting Maven Central to get the distribution down. Only three percent are going to Jetty 10, which has the new features — and there's an error there.
C: Sorry — there's no such thing as Servlet 3.04; that should read 3.1. And so only three percent of our users are interested in the same namespace with the new features that are in 4.0, but nine percent are going to Jetty 11 with 5.0.
C: So it seems to us that, you know, two-thirds or more of the people who are upgrading are going for the new namespace. So the new namespace doesn't seem to be a big issue. We don't know from this whether people are moving from 9 to 10 or 11 for the new features that were in 4.0 — and therefore also in 5.0 — or whether they're just doing it to get onto Java 11, or just to get onto the latest.
C: We don't have those stats and feedback yet, but at least we're not seeing any reluctance to make changes to applications to get to the latest version of the servlet. Okay. So the takeaway that I take from that is that, you know, the fear of doing a breaking change to the API hasn't panned out to be as big a problem as I had feared. Okay, it's a simple change — just changing every javax, or most javax's, to jakarta — but people are prepared to change their applications in order to get onto a later release. And so this really does make us fairly unencumbered for future development. Now we have a process that's fully open, where people can participate. We have an expert group that's got, you know, the right people on the bus. And we've got the evidence that breaking the API, at least a little bit, isn't a big hurdle for our target audience.
C: So the next question is: well, if we're unencumbered and we can go anywhere we like with our bus full of the right people, where should we drive it? Where do we take servlets? Do we go to a 5.1 and add a few little features, or do we jump to a 6.0 with a huge set of feature changes?
C: Well, the point I'm going to push here is that less is more. I'm going to make the case over the next couple of slides that the very first step we should take is asking: how can we simplify servlets? How can we take out some of the cruft of the last two decades? What features can we get rid of to clear the decks, so we have a better time and can make a better specification going forward for 6.0?
C: So, in order to evaluate that, we have to work out — well, you know, we're in a marketplace here. We're giving our users an opportunity to review whether or not they want to continue using servlets as the way they serve their HTTP content. When you break the API, the upgrade is not automatic.
C: They get a chance to think about it. So: performance. Well, very few web apps actually need high performance. You know, the vast, vast majority of web apps that get written out there are lucky to see tens, maybe hundreds, of requests per second. The few lucky ones are going to get the hundreds of thousands of requests a second, the millions of requests a second, but they are the minuscule minority of applications out there.
C: So very few web applications actually need high performance, but most web applications — or web application developers — aspire to needing it, because, you know, you're writing that application and it may go viral, it may take off, and if you don't have the ability to scale then you're already planning for failure. So if you want to plan for success, everybody thinks about performance, even if it's not necessary. Regardless of the need, or otherwise, containers are always going to compete on performance. It is a really key question, even if it's not a vital issue for the final application — and even then, for most applications, the container is very seldom the bottleneck. You know, Jetty on a reasonably powered server can easily do hundreds of thousands of requests a second, and there are very few databases or other pieces of infrastructure that can match that sort of rate.
C: So there are a few applications that can run at those high rates, but most can't. So the container itself, even though it's not the bottleneck, will still frequently be benchmarked on its own, to make sure it's not the problem. And this is a good thing, because even though, rather than one one-million-requests-a-second application, we might deploy a million one-request-a-second applications,
C: the collective impact of any inefficiencies in our server can be huge. You know, in some of the high-request-rate deployments we have of Jetty, if I can shave a nanosecond or, you know, a millisecond off here or there, that's going to have a huge effect on the carbon footprint — data centers just not doing that wasted work.
C: This is going to be a basis on which we can compete, not only between the servlet implementations, but against other non-servlet, non-EE implementations. And so you can see where I'm going here: complexity is the enemy of performance. You know, every if-statement costs you; every if and but, every option, is a cost. So simplicity doesn't guarantee performance, but it can certainly assist in getting there. The other area where we can compete against each other, and against non-EE containers, is features. You know, servlets have been around 20 years, and over that time best practices have changed: byte arrays have turned into byte buffers, byte buffers have turned into direct byte buffers, and now the JDKs are talking about memory segments as the way of talking to IO drivers nice and quickly. Styles change; the way people wish to develop their code changes. So you have XML configuration, then configuration by convention, annotations, dependency injection frameworks.
C: You know, the flavor of the month, year, decade changes, and we've been trying to support those as best we can within the servlet specification — even down to little language features: enumerations, iterators, streams, lambdas, the module system, all coming in there. So we've had to continually evolve the servlet specification, not just to deal with new features of the HTTP protocol — such as, you know, push, changes to cookies, and various other things — but also just to keep up with the style of application development.
C: Oh, here are some more examples. You know, we started off as a blocking API; we've added asynchronous support; but now asynchronous is becoming too difficult, so we're looking at more reactive-style APIs. And then there's the promise of going back to the blocking style but getting asynchronous behaviors with Project Loom — I've got a blog post where I'm a bit dubious on that one, but you never know. Whatever the changes that come, we have to keep pace with them in the servlet API and our implementations.
C: But the problem here is that feature support is an n-squared problem. The more different things you have, the more it's not a linear complexity increase — it goes with the square of the number of features. So when you just add things — I'm going to add iterators as well as enumerations; I'm going to add, you know, asynchronous features to my IO streams that were blocking — you have to make them not just work in themselves: you have to make them work against all the other existing features that are in the servlet API. And, truth be told, I don't know if we've done the best job of that in the servlet API. There are quite a few dark corners where feature X meets feature Y with feature Z turned on, where the spec doesn't say what's meant to happen.
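That quadratic growth can be made concrete with a back-of-the-envelope count (my illustration, not a figure from the talk): every pair of features is a potential interaction the spec must define, so the number of pairs grows roughly with the square of the feature count.

```java
// Pairwise feature interactions grow as n*(n-1)/2 -- roughly n^2.
// Each pair is a potential "dark corner" the spec has to define.
public class FeatureInteractions {
    static long pairs(int features) {
        return (long) features * (features - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(pairs(5));   // 10 pairwise interactions
        System.out.println(pairs(20));  // 190
        System.out.println(pairs(40));  // 780
    }
}
```

Doubling the feature count roughly quadruples the interactions to specify and test, which is why removing features pays off more than linearly.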
C: It's a little bit ambiguous, and a lot of the work we're doing at the moment in the Servlet API project is finding those little dark corners and asking: well, what was intended here? How are we meant to do this? Can we at least all agree among ourselves what should happen, write the spec, and improve the spec so we can get rid of the ambiguity? But yeah,
C: that's taking a lot of our time and effort, which could be better spent improving our containers, improving performance, implementing better features. So reducing the number of things we're doing — you know, like: can we maybe get rid of the enumerations by now? Is our asynchronous abstraction the right thing to have? — can definitely reduce the complexity and let us stay more focused on where we're going in the future.
C: Okay, portability is another way we're going to compete with other containers. We have portability between containers — that's really the power of adhering to standards: if you don't like what Jetty's doing, you can go to Tomcat; if you don't like, you know, XYZ, you always have the option to move to another provider. And there's nothing like a bit of competition between providers to get yourself a very good implementation. If there's no competition, you can get a little bit lazy; if you get, you know, locked into an implementation,
C: that can be a very bad thing. There's also portability between versions. You want to know that when a new version of the standard comes out, and the containers come out to support it, you can just upgrade your application without having to rewrite it all. And we've done a great job at that in the Servlet API: servlets written against 1.0, you know, 20 years ago, still run in containers today, and we've basically kept all the features they had. But complexity is the enemy of portability. These n-squared feature problems we have mean that not only do different containers interpret some of these feature clashes differently — so you can't move between container implementations — but, as we work out that there are ambiguities and fix them, some containers then get problems between versions. So having this complexity and 20 years of history built into the spec is kind of hurting our competitiveness
C: against other alternatives. And, you know, complacency is the legacy container's best friend, in the sense that we rely on the complacency of our users: they don't really want to change. If they implemented servlets, you know, 20 years ago, or 10 years ago, they're going to be complacent; they're not going to go to something different. So we could be lazy ourselves and just churn out the next release as best we can and pave over the mistakes, and they'd stay with us. Well, we've broken the API; we've changed the, you know, namespace standard. We want to attract new users, new projects — greenfield projects should look at servlets and EE and say: yes, this is the right technology to base our futures on. So we've put that break in there — is this a risk, or is it an opportunity?
C: Well, what we have got to counter that break with is familiarity: if you can't be portable, at least be familiar. And there's a lot we can do to keep the servlet spec looking and feeling like the servlet spec without it necessarily being exactly the same. The problem features — and especially the clashes — are used by a vanishingly small number of applications, and most developers are unfamiliar with those problematic features, and specifically with the problem clashes. I'll pop up a few examples in the following slides, and, you know, my hope is that a lot of developers will go: oh, I didn't know that; I didn't know you could do that; I didn't know you had to do that.
C: So, enough of the point-by-point slides — let's get into reading some code as illustration. So here's one of the features that I really would love to cut out of the servlet specification: a feature called object identity, which comes into play when we go through a filter chain or use the RequestDispatcher to do a forward or an include.
C: So here we have a servlet A that's been implemented — I hope you guys can see my mouse, I'll point at things with it. Servlet A is implementing doGet, and all it does is get the servlet context, get a request dispatcher to servlet B, and forward the request to servlet B — except that it wraps the request with its own type, MyWrappedRequest. For the purposes of this example, we don't know what MyWrappedRequest does; it's just wrapping it.
C: It could be overriding some methods so that when someone calls, you know, getInputStream on the request, they don't actually get the raw input stream — they get some processed one, which is converted or changed in some way — or it may be doing other nefarious things, which we'll get to in a sec.
C: Object identity allows servlet B to assume that the type of the request that's passed to this method is exactly the type that was passed into the forward, or into the filter chain that did it. And here's the text of the specification: "The container must ensure that the request and response object that it passes to the next entity in a filter chain, or to the target web resource, is the same object that was passed into the doFilter method by the calling filter. The same requirement of wrapper object identity applies to the calls from a servlet or filter to RequestDispatcher.forward and include." And this is, you know, one sentence in a huge spec that has massive impacts. The utility of this is so small — I mean, apparently people do this so they can add extra methods to the request object, which they can then access in their target.
C: You know, servlet A and servlet B are bad software components — they're highly coupled. Servlet A has to know that if it's passing a new API to servlet B, servlet B is going to do the downcast to get it; servlet B has to know that anybody who calls it has to wrap the request in something it can cast down to, to get it. And that means these two components are highly coupled. If servlet B were called by, you know, some other component that didn't do the wrapping, you'd immediately get a ClassCastException. If a filter is put in between the two, or a filter installed on the FORWARD dispatch type also wraps the request for other purposes, then bang — you get the ClassCastException. So it's a feature that encourages bad software development, and it's only a one-trick pony,
C: or whatever the expression is — you can't do it multiple times. If you've got two different filters with two different concerns that they wish to implement by wrapping a request or a response in a new API, then you can't apply both filters, because one will wrap, and then the other will wrap over the top, and when you get to the target servlet it can't cast down to both of them — only one wins.
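The coupling, and the one-trick-pony failure, can be modeled outside the Servlet API with a tiny hypothetical `Request` interface. All the names below are invented for illustration; in the real API the corresponding types are `ServletRequest` and `HttpServletRequestWrapper`:

```java
// Minimal model of wrapper-identity coupling, outside the Servlet API.
// AuthWrapper and AuditWrapper stand in for two filters that each wrap
// the request; the "target servlet" downcasts to reach an extra method.
interface Request { String path(); }

class BaseRequest implements Request {
    public String path() { return "/demo"; }
}

class AuthWrapper implements Request {
    final Request wrapped;
    AuthWrapper(Request r) { wrapped = r; }
    public String path() { return wrapped.path(); }
    String user() { return "alice"; } // the extra method the target wants
}

class AuditWrapper implements Request {
    final Request wrapped;
    AuditWrapper(Request r) { wrapped = r; }
    public String path() { return wrapped.path(); }
}

public class WrapperIdentityDemo {
    // The "target servlet": relies on its caller having wrapped with AuthWrapper.
    static String target(Request r) {
        return ((AuthWrapper) r).user(); // downcast = tight coupling
    }

    public static void main(String[] args) {
        // Works only while the expected wrapper is outermost.
        System.out.println(target(new AuthWrapper(new BaseRequest())));

        // A second, unrelated wrapper over the top breaks the downcast.
        try {
            target(new AuditWrapper(new AuthWrapper(new BaseRequest())));
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: only one wrapper can win");
        }
    }
}
```

The failure is structural, not a bug in either wrapper: as soon as two independent concerns both rely on downcasting the outermost object, at most one of them can succeed.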
C: So it's a mechanism that makes for bad software components, and it doesn't work if you want to use it more than once — as most highly coupled systems do.
C: The other huge consequence of it is that the request objects that the container implements, to pass on to the application, must now be mutable. And the reason they must be mutable is that when we do a request forward, we have to change the values returned by getServletPath and getPathInfo; we've got to change the behaviors of methods like isUserInRole, and a few others in there. So we have to change the behavior of the request object. Typically you'd do this by saying: oh, I want to change a few methods —
C: I can easily apply a forward wrapper over my request that just changes those methods to what I want, and pass it on. That's a very common technique that a container could use — but it can't do that here, because if the application has passed in a wrapped request, or a specific type of request, we're not allowed to wrap over the top of that to pass it on. So, therefore, we have to mutate the request underneath any wrappers that might be there. And again — this is the stupid thing — it turns out that in servlets, because our request objects can be mutated, they become bigger and more complex; and because they're bigger and more complex, they're more expensive to create; and therefore we pool them and we recycle them, which then brings its own complexities.
C: Then the asynchronous thread, when it's looking at it, is going to see changing values, and it's in a race: if it asks for the servlet path, a second before or afterwards it can get a different answer back. This can make for very unstable asynchronous code, and worse yet, it means we have to do a lot of copying. The asynchronous code has to copy out everything it might need from the request before it starts, because those things may change as the request goes on. There are performance impacts.
C: You know, there's nothing like a nice final immutable field for an optimizer to do a good job of optimizing your code, and any time you have something that can mutate, the optimizer can't do a good job. For a start, it can't cache it as well; it's going to have to look deeper into memory, through a couple of memory barriers, to see whether it's been changed by some other thread, or, you know, what's happening there. The objects are more complex.
C: So therefore we pool them, so they live longer, and they go into garbage-collection spaces that are held longer and are therefore more difficult to collect. In Java there is definitely a way where, if you can create an immutable object which is short-lived and never changes — use it, throw it away — that can be very beneficial for optimization and for garbage collection. And that common approach is not available to our servlet container implementers,
C: because of this one little object identity feature that, I would dare say, very few people use — you know, 0.1 percent of applications. Oh, and then, because there are reasons that a target servlet B might wish to know a little bit about the request before it was forwarded — but that information about the request before it was forwarded is not available by unwrapping and looking at the original request —
C: we quite often have to add attributes, so that the target servlet B, if it wishes to know where the request came from, can go look in these attributes and say: oh, this servlet forwarded it to me. And there are six of these guys just for forward, and another six of these guys for include. Just from a performance point of view, this means that to do a forward from one servlet to another, you have to set six attributes in a map, mutate your request to all the new values, and then call the new servlet — just in case it might want to get any of these values.
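For reference, the six forward attributes being set and unset here are the `jakarta.servlet.forward.*` names, which the API exposes as constants on `RequestDispatcher` (the include counterparts follow the same pattern under the `jakarta.servlet.include.*` prefix). A small self-contained listing:

```java
// The six request attributes a container must populate on a forward.
// These are the Servlet 5.0 names; Servlet 4.0 and earlier used the
// javax.servlet.forward.* prefix for the same set.
public class ForwardAttributes {
    static final String[] FORWARD_ATTRIBUTES = {
        "jakarta.servlet.forward.request_uri",
        "jakarta.servlet.forward.context_path",
        "jakarta.servlet.forward.servlet_path",
        "jakarta.servlet.forward.path_info",
        "jakarta.servlet.forward.query_string",
        "jakarta.servlet.forward.mapping",
    };

    public static void main(String[] args) {
        for (String name : FORWARD_ATTRIBUTES) {
            System.out.println(name);
        }
    }
}
```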
C: It probably won't. And then it comes back, and you have to unset all those values and put them back to what they were — just in case any processing after you looks at these values, and it probably won't. So you're doing a lot of work just in case; it's entirely wasteful. I mean, in Jetty we do a couple of tricks to try to minimize the mutations, but it would be so much easier just to put a wrapper over the top and, when we finish, take the wrapper away and go back.
C
We get our writer out here, and get it there because it makes the lambda work a little bit better. Then we start this lambda in a container-managed thread from the AsyncContext, and all it does is set the status, print out what the servlet path is and whether the user is in role 'special' or not, and then complete. So it's basically just writing the output directly, but from another thread, and this is a race in two particular ways.
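The race being described is hard to show without a full container, but its shape can be modeled in plain Java. In this sketch, `MutableRequest` is a stand-in for an `HttpServletRequest` whose `getServletPath()` value the container mutates back when `doGet` returns; none of this is real servlet API code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A toy model of the startAsync race: an "async" thread reads a field of a
// mutable request while the "container" thread restores its pre-forward value.
public final class ForwardRaceDemo {

    // Stand-in for a mutable HttpServletRequest; not real servlet API.
    static final class MutableRequest {
        volatile String servletPath;
        MutableRequest(String path) { this.servletPath = path; }
    }

    public static String raceOnce() throws Exception {
        MutableRequest request = new MutableRequest("/async"); // value inside the forward
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // The lambda started from the AsyncContext in Greg's example.
            Future<String> observed = pool.submit(() -> request.servletPath);

            // Meanwhile doGet returns and the container un-mutates the request.
            request.servletPath = "/original";

            // Which value the async thread saw depends purely on scheduling.
            return observed.get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("async thread saw: " + raceOnce());
    }
}
```

Either observed value is a "legal" outcome under the mutate-and-restore scheme, which is exactly why the result of calling `getServletPath()` from the async thread is unpredictable.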
C
If this servlet here was the target of a forward, then after we've started this asynchronous thread, it's a race between calling getServletPath and exiting doGet that determines what that request is going to return. It can return two different values: it could return the async one or, if it's a little slow,
C
When starting this thread, we may see the previous value of the servlet path, from before it was forwarded. But even without a RequestDispatcher forward we're in a race, because the second thing we're printing here is whether the user is in role 'special'.
C
Well, this servlet is configured to run as if all users are in role 'special' while they're running this servlet. So that's an identity that gets propagated from this servlet outwards. But once the original thread that came into doGet returns, we are no longer in this asynchronous servlet that's special, and we go back to whatever roles we had before.
C
There are yet other problems with this servlet if we go back to considering request dispatchers. If this is the target of an include, I think it works, but I really don't know what it means to start async and then complete within the scope of an include. That's just one of those dark corners of the spec: if the complete happens before the include returns, does that mean the caller can then include another resource, or is that complete final? I think the complete is final, which kind of breaks the whole include paradigm.
C
Okay, I actually think that's all I've prepared. I've done a long blog where I argue my case for the things that need to be reviewed or removed, in a lot more detail, and I'll just reiterate: what we need to spend some time on now is deciding which features we can remove, and then, once we've cleaned the decks a little bit, it's time to decide which features we need to add.
C
That's the link to the blog, where I hopefully make my case in a little more depth, definitely in longer form than what I've briefly presented today, with a lot more features that I think can be dragged out. So, are there any questions on that?
A
C
I haven't spent a lot of time looking at the various profiles. What we do with Jetty is basically try to make it a very good software component, so that it embeds very nicely with any other components, be they EE specifications or not. If another open source project, whether at Eclipse or elsewhere, comes to us and says 'we need to integrate with you' or 'we're having problems integrating or working with you',
C
they get very high priority; we will do those integrations. So I don't know the exact suite of technologies that are in MicroProfile, but I believe that we support a lot of them.
C
The current debate, I believe, is about whether or not the Servlet API itself will surface in MicroProfile; maybe it will just be RESTful APIs and things like that, and we can certainly be embedded in implementations that way. One of the good features of Jetty is that it is so embeddable.
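As a concrete illustration of that embeddability, here is a minimal embedded Jetty server. This sketch assumes the Jetty 11 / `jakarta.servlet` APIs; artifact names and package layout differ between Jetty versions, so treat it as the general shape rather than exact configuration:

```java
import java.io.IOException;

import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

// Jetty as a library: just a Server object your application owns and starts,
// with no container "tail sticking out the back".
public class EmbeddedJetty {

    public static class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("Hello from embedded Jetty");
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new HelloServlet()), "/hello");

        server.setHandler(context);
        server.start();   // the application decides when the server runs...
        server.join();    // ...and blocks here until it is stopped
    }
}
```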
C
The put-down I always like to use on Tomcat is that you can embed Tomcat, but there's always a tail sticking out the back and a couple of ears sticking out the top: you know there's Tomcat in there. But when you embed Jetty, it can be completely hidden, so it's very, very difficult to know it's there.
C
Our best embedding, I guess, is Google App Engine. They picked us a long time ago, basically so they could completely hide the fact that they were running Jetty, and they replaced a lot of our components with their own. So yeah, ultimately we're an HTTP implementation, and so if you want to put together a MicroProfile suite of the technologies you want, there's no reason that HTTP can't be provided by Jetty. If it's servicing servlets, our servlet implementation is there.
C
If it's going directly to other technologies, then we're very keen to integrate with those, and we haven't had any problems with that.
A
Great, thank you. At this point there are no other questions, but you've obviously covered a lot here, so if people have questions, feel free to ask them in the chat in the next couple of minutes. Just as I wrap things up, I will provide a couple of news items, and then, if there are any more questions, we can certainly address them. So yeah, thank you.
C
The one other thing I'll say is that one of the points I was making here is about the process, and the process does welcome any developers to come along to the mailing lists and the issue trackers on GitHub for the Servlet API. So if you've got ideas about what should be in or out of the Servlet API, by all means join the discussion there; it's open. If you go there and start talking to us, you're part of the process.
A
Absolutely. So yeah, keep the questions coming if there are any. The next Jakarta Tech Talk happens on June 15th: we have joining us Reza Rahman and Graham Charters to present "Powering Java on Azure with Open Liberty and OpenShift", and you can register for that session, if you're interested, using the link that I just posted in the chat. As always, we're also looking for more Jakarta Tech Talk presentations.
A
So
if
you
have
a
topic
that
you
think
might
be
of
interest
to
this
audience
feel
free
to
register
using
the
link,
I
again
posted
in
the
chat
and
finally,
we
would
love
to
get
your
feedback
on
this
session.
So
if
you
would
not
mind
filling
out
the
post
event
survey,
that
would
help
us
continue
to
make
the
jakarta
tech
talk
series
improved
in
the
future.
A
C
We have run on Graal a couple of times in the early days. There's probably more we need to do to really embrace Graal, but we're a little bit demand-driven on that, and we haven't actually had a lot of people asking for it; it is something we've certainly looked at, though. We like being able to start up very quickly, so its pre-compiling nature and various other things are attractive to us.
C
We did have a couple of clients that were very interested in it and evaluated it for those sorts of reasons, and then chose not to use it; that's why we started looking at it and did some initial runs. So again, I can't answer definitively. It's something we have looked at, and we didn't find any huge hurdles, but I believe it's kind of like the n-squared feature problem: the JPMS stuff that we've added lately, and things like that, can make things like Graal difficult.
C
I'm pretty sure we can strip ourselves down to a nice tight core of HTTP and servlets and have that work well on Graal. Whether our full suite of modules and features would work out of the box, I don't think so. But again, it's one of those things where we're demand-driven: if there are people saying "we really want Graal, and these are the reasons", then please come to the Jetty mailing lists if it's a Jetty-specific thing, or to the Servlet API ones
C
if it's servlet features to integrate, and we'll certainly have a look at that.