From YouTube: Jakarta EE TCK Community Outreach Call, Oct 9, 2020
Description
This call welcomes everyone interested in joining and contributing to the Jakarta EE TCK.
The call provided two live code demos showing how to run a set of tests, and how standalone TCKs have already migrated the TCK architecture from JT Harness to Arquillian and JUnit.
► Project website with links to the repository and the jakartaee-tck-dev mailing list: https://projects.eclipse.org/projects/ee4j.jakartaee-tck/developer
Find out more about Jakarta EE and follow:
Website: https://jakarta.ee
Twitter: https://twitter.com/JakartaEE
Facebook: https://www.facebook.com/JakartaEE
LinkedIn: https://www.linkedin.com/groups/13597511/
A
We have plenty of items on our agenda, so, for the sake of keeping the call on time, we welcome everybody who is joining; feel free to add your name to the attendees. We have Andy here on the call.
C
Okay, so my screen is shared here, and I would like to start from the beginning. To run a TCK, you have to meet some prerequisites, and I have a list of the required items here. The first thing is JDK 1.8, because the TCKs currently run only on this JDK; it is not possible to run them on JDK 11 or anything newer. You can download it from Oracle or from OpenJDK; I don't think it matters which.
C
Next is Apache Ant, which is required because Ant is essential for this. Recently we had version 1.10.9, which is the newest of the 1.10.x releases, and older versions are no longer available for download, so if something still uses 1.10.8, for example, it should be updated to 1.10.9. You also need GlassFish 6; the latest build can be downloaded from the Eclipse downloads.
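The Ant version requirement above can be checked mechanically; a minimal POSIX shell sketch, where the 1.10.9 cutoff follows what was said on the call:

```shell
# Return success only if the given Ant version is at least 1.10.9,
# the minimum discussed on the call (older 1.10.x downloads disappear).
ant_ok() {
  IFS=. read -r maj min pat <<EOF
$1
EOF
  [ "$maj" -gt 1 ] && return 0
  [ "$maj" -eq 1 ] && [ "$min" -gt 10 ] && return 0
  [ "$maj" -eq 1 ] && [ "$min" -eq 10 ] && [ "$pat" -ge 9 ] && return 0
  return 1
}

ant_ok 1.10.9 && echo "1.10.9 ok"
ant_ok 1.10.8 || echo "1.10.8 too old"
```

In practice you would feed this the version string parsed from `ant -version` output.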
C
So there is a list of TCK bundles here, and I would like to show how to use one in a local environment, on my own computer. I will show how to do this: you have to open a shell. For those who use Windows, I think it will be some Unix-like shell, but I use one directly, so for me it's native. We will run the tests for JSON-B.
C
This script is intended to be run as a standalone job or task, or just locally, but it has not yet been converted to the jakarta namespace. Here you can see some paths that still say javax, which is no longer correct; by now they should say jakarta. You can compare this JSON-B TCK script with the JSON-B TCK script two levels above here. That one is a more complicated script, but it is meant for jobs on Jenkins and is not intended to be used locally, because it is more complicated: it takes parameters, it has values predefined by external scripts, and so on.
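The leftover javax paths mentioned above can be fixed with a quick find-and-replace; a sketch, where the script file name and the variable inside it are hypothetical stand-ins:

```shell
# Demonstrate the javax -> jakarta package rename on a throwaway copy of a
# run script (the file name and its contents are hypothetical examples).
printf 'export JSONB_PKG=javax.json.bind\n' > jsonb_tck_local.sh
sed -i.bak 's/javax\.json\.bind/jakarta.json.bind/g' jsonb_tck_local.sh
cat jsonb_tck_local.sh
```

The `-i.bak` flag keeps the original as a `.bak` file, which is handy when sweeping many scripts at once.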
C
Actually, yes: if someone is interested, they can submit a pull request and it can be fixed, because in my opinion it should be fixed. But the CTS team certainly doesn't have enough time to go through and fix all of this, and these scripts are not important to them, because they are only for standalone local jobs, which can be handled by some other team or person. So it certainly can be fixed and pull requests can be submitted; in fact, that would be very welcome.
C
Actually, I was going to do this myself, but I don't have the time. So here we can continue, and this also gives faster performance.
C
And as soon as you're doing this locally, you can do some hacks. I was used to running this as a job in Jenkins, and in Jenkins it's not possible to unzip something manually or download something manually; you have to put everything in the script. For local usage, sure, you can just put it inside the script and enjoy.
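Folding the manual download and unzip steps into one script, as described, might look roughly like this; the bundle URL is a placeholder, and the actual fetch is left commented out:

```shell
# Local wrapper sketch: keep every setup step inside one script so the same
# flow works both on a laptop and in a Jenkins job (URL is a placeholder).
BUNDLE_URL="https://example.org/jakarta-jsonb-tck.zip"
WORKDIR="$PWD/tck-work"
mkdir -p "$WORKDIR"
# In a real run you would fetch and unpack the bundle here:
#   curl -fsSL "$BUNDLE_URL" -o "$WORKDIR/jsonb-tck.zip"
#   unzip -q "$WORKDIR/jsonb-tck.zip" -d "$WORKDIR"
echo "workdir prepared at $WORKDIR"
```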
B
Hold on a second, I have a question, Maxim. You're telling us that we have more than 35, about 46, well, 35 specifications for Jakarta EE 9, and to run the TCK against all of them we will have to run the TCK on GlassFish?
C
For all the TCKs you can use GlassFish or Tomcat, but for the standalone jobs, or for the stable scripts, basically GlassFish is used, because GlassFish contains all the required APIs and implementations. You basically don't have to do anything: you just download GlassFish with all the current implementations and APIs inside, so they are built in. I will show you; they are here.
C
What you are testing is strictly limited to the TCK bundle: when you download a bundle, you choose exactly which API or implementation you will test. In our case... by the way, this one is already running here, but it's failing because we had no GlassFish.
C
I will clean it and start it again. So here we have GlassFish and the JSON-B TCK, and the JSON-B TCK says that we will test only the API for JSON-B and the implementation for JSON-B. That's it; we cannot test anything more with this bundle. GlassFish actually provides that API and implementation, and so this is the connection between them.
B
This is a conversation for another time, and it hopefully needs to happen in the specification committee, but we have a problem: when we released Jakarta EE 8, we saw in the activity that the GlassFish runs make everything else dependent on it, and all the other APIs get hidden behind it in all the contributions. We either need to remove GlassFish, or position it so everyone knows it is outside of what Jakarta is, because that's...
B
It's
unrelated
what
I'm
saying
here
is
what
you're
showing
us
you're
showing
us
something.
That
is
what
we
need
to
do
is
say
it's
not
that
you
need
to
contribute
to
glass
fish.
It
is
already
done.
It
is
easy,
you
just
choose
it
run
it
you
don't
have
to
worry
about.
Finding
bags.
Is
that
correct,
maxine.
B
And with the TCK, you know, sometimes when you run something you find tests failing within the suite and you need to go and fix that; we are not doing that here. We're just focusing on enabling you to run the TCK and not have to worry about anything else but the code that you want to test, right? This will tell...
A
That was going to be part of the feedback afterwards. Last question: Maxim, as you explained here, you are using this config file that contains the implementation, because that comes with GlassFish, right? So if you have another implementation, you will need to provide that.
C
And the issue is that, usually, you would like to provide some new integration, a new implementation, for the TCK to test; you just want to test your changes inside your implementation. If that situation occurs, the point is that here in GlassFish you have those modules listed, and you have to replace this one jar, the one jar of your choice.
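Replacing the one jar of your choice under GlassFish's modules directory, as described, amounts to a copy with a backup; in this sketch the directory layout and jar names are simulated stand-ins so the steps can be followed end to end:

```shell
# Sketch: swap the bundled implementation jar in a GlassFish install for a
# locally built one. Paths and jar names below are hypothetical examples.
GF_HOME="$PWD/glassfish6-demo/glassfish"
mkdir -p "$GF_HOME/modules"
echo "shipped impl"   > "$GF_HOME/modules/yasson.jar"   # stand-in for the shipped jar
echo "my local build" > my-yasson.jar                   # stand-in for your own build
cp "$GF_HOME/modules/yasson.jar" "$GF_HOME/modules/yasson.jar.orig"  # keep a backup
cp my-yasson.jar "$GF_HOME/modules/yasson.jar"
cat "$GF_HOME/modules/yasson.jar"
```

Keeping the `.orig` backup makes it easy to restore the shipped jar after the test run.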
E
Maxim, could you maybe open up another tab and show people where to get the domain file, the domain log? Because there's the script output, and there's also the GlassFish domain log, I think.
E
So is that the log file that you...
G
For multiple tests there should be .jtr files in the work folder, under com, I think. Yeah.
G
What's probably interesting on this screen is the test folder. I don't see it yet, but when you're looking at a failure in a .jtr file, it's good to note... oh, there it is, it's in the work directory. It's noting the path of where the test failure was, or, I guess, the test= value in this case, that's what's interesting.
G
If you go back to that source folder directly and run from there, it's only going to run the tests under the noted folder. So if we do hit an error and you're trying to fix it, you can run only that particular test folder when you subsequently run it again, so you don't have to run for 15 minutes just to see if your code change is addressing it.
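Scoping a re-run to one test directory, as suggested, looks roughly like this with the Ant-based harness; TS_HOME, the directory path, and the ant target are assumptions, and the checkout layout is simulated here so the scoping step itself is visible:

```shell
# Sketch: cd into the failing test's source folder and run only that subtree
# instead of the full 15-minute pass. The layout is simulated, and the real
# invocation (ant runclient) is left commented out.
TS_HOME="$PWD/tck-demo"
mkdir -p "$TS_HOME/src/com/sun/ts/tests/jsonb/api/config"
cd "$TS_HOME/src/com/sun/ts/tests/jsonb/api/config"
# ant runclient   # would run only the tests under this folder
echo "scoped to: ${PWD#"$TS_HOME"/src/}"
```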
A
Yeah, what I found, for example, when I was testing JavaMail a little bit, is that sometimes the scenarios or prerequisites are sequential; they are not independent. I don't know if that is common to all the TCKs in the standalone TCK case, but at least on one particular issue I was seeing with JavaMail, if you don't execute tests A and B, you cannot expect a good result from test C, for example.
F
Yes, we can. Okay, so Dimitri invited me to give a quick demo here, so I'll just start off with the discussion. I proposed this idea a couple of months back.
F
I think it was either the Jakarta EE dev mailing list or maybe the TCK mailing list, but essentially the idea I was tossing around was to break the TCK tests out of the kind of monolithic TCK repo that Maxim was just showing, and make them more standardized and more easily runnable and consumable for new folks.
F
There are quite a few people on the call who are familiar with MicroProfile, and this is basically just adopting the exact same pattern that MicroProfile uses for its TCKs, but applying that pattern to some of the Jakarta EE specs.
F
If you're not familiar with MicroProfile, I'll quickly run through the way MicroProfile does its TCKs. I've taken JSON-B as a test subject here, so I'll show all of this in the context of JSON-B, but it could really be applied to any of the specs. Right now we're looking at the JSON-B API repo: we've got the API interfaces here, we've got the spec, and the part that I've added is actually a tck folder.
F
So here the JSON-B API repo owns its own TCK tests. I actually copied the source code from the JSON-B TCK tests in the repo that Maxim was just showing and copied them in here. The thing I like about this style is that it keeps the TCK and the API in one spot. David, I don't know if you're trying to talk on this call or a different call... okay, he must be trying to talk on a different call.
F
The thing I like about having the TCKs here is that if we make a new spec enhancement, adding a new feature, we can have the API changes, the spec document changes, and the new TCK tests all in one single PR, so they can all be reviewed as a cohesive unit. I also think it's really helpful to view the new TCK tests along with a new spec change, because it really helps show a working example of what the new function is going to do.
F
The idea with the TCK here is that it's just the Java classes, and they will actually be published on Maven Central as an artifact themselves, so they can be published as a binary. The cool thing is that any implementation can then consume the TCK tests as a standard Maven dependency. The tab I'm in now is Yasson; Yasson is formerly the reference implementation for JSON-B, and, like most of the Jakarta EE spec implementations, it can run standalone.
F
Implementations don't necessarily have a particular allegiance to a specific app server; they can be run inside an app server, but they can also run standalone. With Yasson that's the case: you can run Yasson standalone in just a regular Java SE program, or you can consume it into GlassFish or Open Liberty or whatever and use it in a similar way. For Yasson, and for the developers of Yasson, we want to make sure the code stays compliant with the TCK.
F
So what I've done here is add a yasson-tck subfolder, and all we need to do in order to run the TCK against Yasson is add a pom file here. It just pulls in the JSON-B TCK as a Maven dependency, and then I'm also pulling in the Arquillian Weld embedded container, because Yasson has a couple of integrations with CDI.
F
So this is just running, essentially, JSON-B and CDI in a standalone Java SE container. Oh, and I'll also mention that for Yasson we have a Travis CI pipeline, where every single change we push to Yasson goes through this pipeline. We have checkstyle and copyright checks up front, and then we actually compile Yasson and run all of our unit tests.
F
But then, additionally, after this we actually run the JSON-B TCK against this new copy of the tests, and it only takes 49 seconds to run the entire JSON-B TCK, so it's very fast. I think with the Jakarta EE TCK monorepo it takes, I think Maxim said, something like 25 minutes to run the JSON-B tests, so it's considerably faster.
F
Now I'll switch over to Eclipse here and give you a look at the code involved. The JSON-B TCK tests are the same ones that are in the Jakarta EE TCK repo, except I've translated them to use JUnit and Arquillian instead of the custom Java test harness that the TCK typically uses. I've just added @Test annotations and then changed some of the assertions to the JUnit assertions or fails, and things like that.
F
Then, if I switch over to Yasson: Yasson has our normal source and our normal unit tests, but also, in this yasson-tck subfolder, we have that same pom that I was showing. So if I want to run all of the JSON-B TCK with Yasson, all I've done is clone the repo, and I have Maven installed...
F
I have a Java installed, which is very standard for a typical open-source Java developer to have already, and I can just run mvn verify; that will initiate a Maven build and run all of the Yasson tests standalone. This is just basic, I think it's Maven Failsafe, maybe it's Surefire, I always get those two mixed up, but it's standard.
F
It's the standard Maven way of running tests, and I think, in general, open-source developers are very familiar with this; they know how to do mvn test or mvn verify. Say I want to run just the JSON-B config test: the standard way to do that with Maven is -Dtest=JsonbConfigTest, and then it will only run that particular test class and then complete.
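The two Maven invocations just described, written out; the test class name follows the spoken example, and its exact spelling in the suite is an assumption:

```shell
# Collect the standard Maven commands for running the JUnit-based TCK; the
# class name JsonbConfigTest is a hypothetical spelling of the spoken example.
CMDS='mvn verify                          # full JSON-B TCK run via failsafe/surefire
mvn verify -Dtest=JsonbConfigTest   # only one TCK test class'
printf '%s\n' "$CMDS"
```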
F
There are standard JUnit reports available for this: in the target folder we get a JUnit report that gives us the overall test success rate, and I can click into any of these and get a more detailed breakdown of what the test results look like. If there were any failures, I believe it also captures the standard out and standard error streams. And none of this is something I invented.
F
That's pretty much the gist of the proposal. Right now there are actually two copies of the JSON-B TCK floating around: the official one in the Jakarta EE TCK monorepo, and then I've kind of mirrored all of those TCK tests in the JSON-B API repo itself. My hope is that eventually, maybe in Jakarta EE 10, we can have only one copy of the JSON-B TCK tests, carve those out, and just have them be owned by the JSON-B...
F
...API repo, for the reasons I mentioned earlier, and then eventually other specs could potentially follow suit if they wanted to. I'll also mention that the CDI spec and the Bean Validation spec are already doing their TCKs exactly like this: they already use binary TCK artifacts that are run with Maven and Arquillian.
B
I mean, when you say a few other APIs are using this: why hasn't that come to the TCK forum, so that those that are not yet there and have missed this could join the effort? Because simplifying the test time, and also keeping everything under one area, is just a huge win. Are we doing enough to...?
F
Yeah, I think the decision needs to be made on a per-spec basis. So I guess once JSON-B gets permission to blaze this trail of transitioning from the monorepo to standalone TCKs, that will at least set the pattern up for other specs that choose to follow suit.
F
Those can still be added to and worked on in the future, but for relatively younger specs like JSON-B, which I think still have a lot of features ahead of them, it would be wise to go through this effort. That's what I've done as an expert group member on the JSON-B spec: just to make my life easier, I've tried to do this.
F
You're super quiet, Scott, go ahead.
G
I'm turning up my... oh, it's funny, it just turned itself down; it's saying I set it too high, I don't know. Listen, let me try and ask my question, Andy.
G
What would you think about moving some platform-level testing into JSON-B? Let's say it's a cross-blend of different containers, which would mean, I guess, that in SE test mode you wouldn't be able to run the container-level ones. It would be a lot more complicated, in that you'd kind of need a container that implements the platform-level testing. But this is a platform-level question that has come up.
G
We were discussing it on the mailing list before, and I'm just curious what you would think if those tests were to be moved, or someone wanted to move them, or the platform wanted to move other tests in. I know Arquillian supports that, that's no problem, but as a general thing it's a lot more complexity and there's a lot more chicken-and-egg.
F
Yeah, so what I would suggest there is that usually we don't have tests that involve every single spec in the platform. Usually it's two specs interoperating with each other; very frequently it's CDI plus something else.
F
Sometimes we have interactions between three specs in a single test case, but I think it's pretty rare for more than three specs to get involved in a single test case. For that, ultimately, the way I see it, and the way we do it in Open Liberty, is that we try to have no tests in no-man's-land. What I mean by that is: really, no one designs an API or a spec... well, I guess people design specs at the platform level, but no one works on an API at the platform level.
F
So if there is a test case or some interaction that involves two or more specs, we should just move that test to one of the involved specs' repos. Then, for the case where you can still have a compliant implementation without the other spec involved, we can have conditional tests, as we do in MicroProfile as well, where we actually have truly optional specs and we say: okay, run this test...
F
If
the
api
for
this
other
integrating
technology
is
available,
but
if
that
other
integrating
technology
is
not
available,
then
we'll
just
skip
the
test.
So
to
give
a
concrete
example,
there
are
some
json
btck
tests
that
interoperate
with
cdi,
but
it's
perfectly
valid
to
use
json
b
without
cdi.
So
what
we've
done?
F
What we can do there is check for the presence of a CDI implementation on the class path in all the test cases that involve JSON-B plus CDI. If CDI is not present, we simply skip those test cases, but if CDI is available, then we run them.
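The presence check just described can be sketched at the shell level; in the JUnit-based tests this guard would typically be a runtime class-path probe that makes the test skip itself, but the decision logic is the same. The class-path contents and jar names here are hypothetical:

```shell
# Decide whether to run the JSON-B + CDI interop tests based on whether a
# CDI implementation (e.g. Weld) appears on the class path. The CLASSPATH
# value below is a hypothetical stand-in with no CDI jar present.
CLASSPATH="yasson.jar:jakarta.json.bind-api.jar"
case ":$CLASSPATH:" in
  *weld*) MODE="run"  ; echo "CDI present: running JSON-B + CDI tests" ;;
  *)      MODE="skip" ; echo "CDI absent: skipping JSON-B + CDI tests" ;;
esac
```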
G
That makes total sense. So we're open to additions; let's say for EE 10 a few other specs' tests at the platform level move in. That's kind of the fundamental question I've had, because we have something like four million lines of text, or whatever, in the platform TCK, and maybe two million lines of actual Java source. We don't want to lose any testing, and maybe not all specs will want to do this.
E
So, Scott, I think most of that really has to be derived from what the spec says and what requirements are listed in the TCK user guide, right? If those requirements call for different options, optional features, or running in different containers, then that TCK should support that. In many cases with the platform TCK, it's the platform that has those requirements in it, and that's where they live.
E
You
know
that's
where
those
different
modes
come
in
and
that's
why
sometimes
like,
for
example,
with
jms
you'll,
see
different
numbers
between
the
gms
standalone
pck,
as
opposed
to
what's
in
the
jms
vehicle
in
the
platform
test,
and
I
don't
expect
that
to
change
right.
E
What we need to do is come up with a way to keep the core of the standalone TCKs associated with the APIs, just like we've seen here, and then have a way to import that into the platform TCK. The platform TCK then needs to add whatever its requirements are and run those tests in addition to the standalone tests. That way each of them is separated, and I think the goal should be...
E
You
can
always
just
look
at
the
teasing
at
the
spec
and
look
at
the
tck
and
grok
everything
that
you're
going
to
need
to
do
based
on
those
documents
and
that
you
know
those
those
tests
right
at
the
spec.
D
For example, CDI has tests for the EJB integration that are optional from the CDI perspective; from the EJB perspective, CDI integration is a requirement.
D
So, from the EJB perspective, all those optional tests are required tests. Based on that alone, you would think: okay, if they're required for EJB and optional for CDI, we should move all those tests from CDI into EJB. The problem with that is that the specification containing the requirements for how the integration should work is the CDI specification.
F
Yeah, CDI kind of defines a framework for how to interoperate with a bunch of things, and it should be responsible for testing that that framework works properly. But going back to the example of CDI plus JSON-B: when you look at a scenario like that, if you just look at the testing scenario, most of the ownership of getting that interaction right falls under JSON-B, assuming CDI isn't fundamentally busted and CDI extensions...
F
You
know
work
in
a
generic
sense.
All
the
ownership
falls
in
in
json
b,
so
I
think
it
can
kind
of
just
be
evaluated
on
a
case-by-case
basis.
In
my
opinion,
but
moving
away
from,
I
think
it's
important
to
move
away
from
having
tests
landing
in
no
man's
land
at
the
at
the
platform
level,
because
then
it's
it's
unclear
to
the
spec
authors
who
who's
responsible
for
maintaining
those
tests.
F
If they make a spec change, I think, frankly, those tests just get forgotten about until the last minute. With these standalone TCKs I can verify that the JSON-B TCK works with every single change I make to the JSON-B API and to a JSON-B implementation, but we're not going to be able to run the full platform-level tests on every single change like that.
H
Yeah, I just started figuring out how this whole thing works, so I tested it out with JPA, because JPA was one of the specs I had some experience with earlier, in the sense that I've used JPA in the past. So that's what I tried it out with, and I think this discussion is about, for example, where the tests for each spec should live.
H
I
wasn't
really
sure
I
basically
pulled
it
out
from
the
I
looked
at
the
instructions
for
the
jpa
tck,
so
I
think
this
discussion
that
we've
had
in
the
last
few
minutes
is
basically
about
where
the
tests
should
live
and
should
be
maintained,
whether
with
the
individual
spec
or
at
a
centralized
level.
H
So
if
that's
what
I
understood
correctly,
okay,
so
yeah
I
mean
I
didn't.
I
think
I'm
probably
I'm
not
really
sure
about
all
the
things
that
are
involved
right
now,
because
I've
only
just
started
doing
it
because,
but
I
think
from
what
I've
understood
is
because
of
all
these
dependencies
and
interdependencies
involved,
probably
makes
more
sense
to
have
each
like
get
involved
with
the
maintenance
of
their
own
tck.
D
Yeah, the status is that the JPA test suite you ran was actually generated out of the big platform TCK, so the actual source for it doesn't live in the JPA project. We're able to produce a TCK that looks like a standalone TCK, and it is a standalone TCK, but really only in binary form.
D
And so what Andy did was basically figure out how to move the actual source out of the platform TCK into the individual specs, so that the standalone TCK produced comes from source that's under the control of the JSON-B project, which is really fantastic. So there's a little bit of legacy there.
H
Sorry, just a quick question: when we say platform TCK, what does platform historically mean in this context? What traditionally belongs to the platform versus, say, to a certain spec?
D
The platform refers to the umbrella Java EE or Jakarta EE specification itself. There's the Jakarta EE full platform (it's never really actually been called full, but it's basically the Jakarta platform), and then there's the Jakarta web profile specification.
D
And all the tests have generally lived there for convenience, I guess; the decision was made a long time ago and just continued, and so we end up with one big pile, one big code base where all the tests live, while the specifications and APIs are all sort of separate. After the donation, everyone had the same thought: we needed to break up that big source thing, that one big monorepo, as Andy's been calling it.
D
You
know
we
need
to
break
that
up
into
smaller
bits
and
move
the
tests
out
into
the
specification
projects
that
actually
own
that,
so
that
that
is
a
very
major
amount
of
work
and
that's
that's
what
was
being
presented
and-
and
so
you
have
some
experience,
so
you
know
because
you
attended
this
call.
You
now
know
why
that
that
that
jpa
tck
looks
like
a
standalone
tck,
but
actually
isn't.
Quite
yet.
A
I have one quick question for Andy: how was the migration from the JT Harness to the JUnit and Arquillian approach? Was there a plugin or something to automate all the tests, or was it more manual?
F
Yeah, I would say the whole thing took me about an hour. I didn't do any fancy plugins or tooling; I basically just copy-pasted all the code out into a new skeleton Maven project, and then it was basically a bunch of bulk find-and-replace operations: searching for all test methods that start with public void test, and then just slapping an @Test annotation in front of them.
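The bulk find-and-replace described can be approximated with GNU sed; a sketch run against a tiny stand-in source file, since the real patterns used were not shown on the call:

```shell
# Approximate one step of the JT Harness -> JUnit migration: prepend an
# @Test annotation to every method that starts with "public void test".
# The sample file below is a minimal stand-in for a real TCK test class.
cat > SampleTckTest.java <<'EOF'
public class SampleTckTest {
  public void testJsonbConfig() { }
}
EOF
sed -i.bak 's/^\( *\)public void test/\1@Test\n\1public void test/' SampleTckTest.java
cat SampleTckTest.java
```

The `\n` in the replacement relies on GNU sed; on BSD sed the newline would need to be escaped literally.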
F
And changing some of the assertions with find-replace... yeah, just lots of find-replace operations, I would say. I'm certainly aware that JSON-B is probably one of the specs most conducive to this kind of migration, and I'm not implying that every spec will be this easy, but I think JSON-P would be a great candidate for it, and probably some of the newer specs, like the Jakarta NoSQL one.
F
If that's coming along; I'm not sure what they're doing for TCKs, but it could be a good candidate for some specs, for sure.
D
Well, yeah, I was just thinking: there are a lot of people who would want to do this, and rather than have them all start from scratch with nothing, if possible, at some point in the coming weeks you could carve out an hour to pull both versions back up, compare them side by side, and go "oh yeah, I had to change this and this", and just shoot out a small list.
F
Okay, yeah, I'll wait to do that until another spec comes forward with interest, because I'm not a fan of write-only documents. If I'm convinced someone's going to read it, then I'll write it.
D
Right, yeah, if it's worth the time. In terms of timing, everyone's mentioned EE 10, but I do wonder: we have to do Java 11, so we have to kick out a Jakarta EE 9 that supports Java 11, and I wonder if that would be the ideal time to do some of this.
G
That is weird. So I was just going to say: we need to have discussions about this. In the case of JPA, in the platform TCK, the JPA source tree has an ee folder that makes the container-level tests mostly visible and obvious. So, following the idea that the platform-level tests would stay, maybe the core would stay for JPA but the others would move, I guess; I mean, that would be...
F
I
would
say:
don't
touch
jpa
at
all.
I
mean
this
is
this
is
really
this
is
kind
of
a
problem
of
sharpening
the
x
versus
chopping
wood
right.
I
think
we
still
have
a
lot
of
wood
to
chop
with
json
b
and
some
of
the
newer
specs,
but
j
things
like
jpa
and
servlet
and
ejb
are
mostly
feature
complete.
They
don't
really
have
much
wood
to
chop
so
to
speak,
so
it
doesn't
really
make
sense
to
invest
in
improving
the
tools
for
those.
B
So we're at 10:04 and we need to close, but, Andy, you still need to write some things in the agenda. So can we throw at you the task of copy-pasting the entire agenda and just sending it in a new thread, with, say, "TCK call" and the date as the subject, then pasting the entire agenda once you are done and also adding the link to it, nothing else? Can you do that and publish it?
B
I wonder if you can join again on a future Friday, because I believe you have so much knowledge, and I think we are at a disadvantage in that we need to convince you that some of us believe documenting now is very important and there is much power in it. You are one of the people who, through MicroProfile, bring in the best light from that side of the fence, which we need over here.
F
Yeah, that got brought up on the mailing list as well; I think we'll have to. I do have to run, though, so we can hash that out next time. Yeah, okay.