From YouTube: IETF-CORE-20220427-1400
Description
CORE meeting session at IETF
2022/04/27 1400
https://datatracker.ietf.org/meeting//proceedings/
A: Screen sharing, yes, I think.
F: Yeah, but then I can't take notes at the same time. Oh, I probably can; I just have to open a new tab, and then you can share your screen. One moment.
A: Yeah, thank you. If you scroll up, there is something about the Note Well there as well, so we have covered that, and you go down to the three agenda items.
A: Yeah, there is a Note Well, and we have three items we could be talking about: one is CORECONF, one is Problem Details, and one is profiling EDHOC for CoAP and OSCORE. I'm seeing Rikard, great, and I'm seeing myself.
A: But let me just say, on the CORECONF front: as you may have noticed, the YANG-CBOR document was approved, so the next one in the set of four documents is the core-sid document. There are two things that need to be done here.
A: One is that we actually wanted to make some last-minute changes in YANG-CBOR that turned out to be too complicated to do at the last minute; they were about the strategy we are going to use for curating the SID space. So we might as well do that in the core-sid document. The objective is to write this up, and to deal with the unexpected complexity of writing this up.
A: So that's one item. The other item is that we had a discussion with the YANG Catalog people about how to integrate between their processes and the processes that SID needs, and there are notes from that discussion that have to be translated into text that goes into the SID document.
A: So far we have only discussed this with IANA, and that was one of the reasons why it took so long to get this document done. But we now also have to put in the catalog stuff if we want to make use of their services, and I think that would be a pretty useful thing to have. This is currently stalled on available time from my side, as usual.
E: We are in working group last call, and ANIMA has an allocation in the core-sid document already, right? So we're not waiting for the registry to be created, because we're going to be in the initial data, so delays don't affect us that way. Getting it right is just fine.
E: I think we've had the conversation with the core YANG people about how we do the temporary allocations when we do a bis document, and, I don't know, maybe we need some diagrams or something like that to explain it to people, but I guess we've settled that, right? I think we've settled that with the IESG review; as you say, that's all done. But I just don't know...
E: Where does that SID file live? I thought it was in core-sid. Actually, it's defined as a YANG definition in core-sid, and I proposed a pull request on it to add... what did I add? I added an enumerated type, because I didn't think that a boolean was going to satisfy us in the long term. But I don't think we had a good conversation about that.
E: Well, I think it requires a conversation, maybe in... I forget which one's NETMOD and which one's NETCONF, but one of those two. Maybe it requires a conversation there to motivate the process.
E: Yeah, that one exactly. I think we're in a working group last call right now.
A: Yeah, okay, but I think we can take that offline. I think it's important to know what actually is slowed down by the remaining work, but we can take that offline; I don't think we have to decide anything at this point. I'm just having a quick look into the normative references.
E: Yeah, we'll delay in the RFC Editor queue, but we still have IESG review to go through. So it could very well be, you know, a tortoise race to say who's last, but I'm not worried about that.
A: Yeah, sometimes the RFC Editor puts you into a cluster even without a normative reference, and something gets stuck in that cluster, and that can be a lot of fun.
A: Okay, good. I think that was all I needed to know for core-sid. So do we have any other documents that are actually waiting for this?
E: I'm not aware of any other users. I think that a bunch will show up in NETMOD as soon as this is finally published, who will say: oh yeah, actually we want to do this. Based on what Andy has said about the network overhead of XML, he was very enthusiastic about making that smaller and the code faster, and that's worth repeating in places, I think.
E: But I would say that we probably should have a tech talk at the IAB about this, and I think a number of people will come and say: oh okay, that lets me serialize YANG to CBOR in a much more mechanical way. And maybe we'll suddenly have, you know, a fair number of adoptions of work that is halfway done. I'm thinking like CoRIM or some other stuff like that, which may directly benefit that way.
A: Okay, but that's kind of future. I was trying to do a referenced-by check; I don't know, it looks broken right now.
E: For sure, but I don't think there are any other users. I think if there were, we would have tripped over each other.
G: Right, a lot has happened recently. Ari brought to our attention that 3GPP is seriously considering Problem Details, hoping for it to be finalized very soon, like mid-June. But Problem Details became a CoRAL document last year, and if it has to be tied to CoRAL there's no chance it can be completed so quickly. So you may have noticed a change of direction: to try to have a first version that can still be good enough to be used by 3GPP, leaving the CoRAL version for later.
G: So the document version submitted today is an attempt to take this new direction. Speaking of details, it also has two alternatives to consider today, and the details are in Carsten's slides.
A: Good. So, just as a reminder of what Problem Details was: this is essentially looking at RFC 7807 and thinking about the concise equivalent for that. So what is 7807?
A: It has additional information like a title and a detail, and even an instance pointer where you might be able to get additional information about the problem, and it also allows other entries in this problem details map. In this example, those are the entries balance and accounts, which are added by the server that is providing the problem description.
A: So, interestingly, the detail has some information that the additional entries don't have, the number 50, but yeah, that's a realistic example. So that's where we were coming from. The 7807 people are actually working on a bis right now, trying to make use of the things they have learned. But this is ongoing and will take some time, so we probably won't be able to actually use 7807bis for this work; there's still an extremely interesting issue tracker out there, though, which shows the kinds of problems you run into.
A: So the last draft is from July 2020, and we decided this is really a useful application for CoRAL, so we might as well make use of CoRAL, but that will take some more time; we are not yet done with CoRAL. And now the problem (I'm overusing the word "problem" today) is that 3GPP apparently needs this now: they have a release coming up in mid-June, and they want to be able to use the concise problem details alongside the HTTP ones.
A: So that's the background, and we are now looking at whether there is a way for us to come up with a Concise Problem Details document by then. This will require some pretty quick thinking to make it happen.
A: So one thing that the RFC 7807 people learned is that using a URI reference for the type doesn't really work very well. One form of URI reference, of course, is just a short string, like "out-of-stock" or "out-of-credit" here, and the problem is that when you write that into a 7807 problem detail, since it's a relative URI reference...
A: ...it is resolved against the current base, and the current base is the problem details document. So you would have to have, kind of in the same directory, something that is called out-of-credit that you actually resolve this to. And since, of course, problem details documents are usually supplied from very different places in the path tree, that goes wrong most of the time. So one thing we discussed is whether we shouldn't go fully in the direction that 7807bis was heading.
A: This is partially picking up now, which is actually making use of registries. And, of course, we have one registry that we can simply use for registering types of things, and that's the CBOR tag registry.
A: So my idea was to say: oh, let's just describe how to define your own CBOR tag for your own problem type, and that's what's actually in the Internet-Drafts directory now. On the slide there is a pointer to the GitHub, but I also submitted this as an Internet-Draft, because this is an interim and we should really be talking about things in the Internet-Drafts directory.
A: So not many problem types will have the need for a detail entry called balance, but this one does, and the description should explain what that actually means. And now that we're talking about registries, we might actually also want to have a registry for the standard problem details, like type, detail and so on. So if we apply this, we get something like this: this is the CBOR tag. 4711 would be the registered CBOR tag for the problem type defined by example.com, and so we don't have a URI any longer.
E: Is example.com here a specific vendor, or is it like a consortium that has categories for that entire market?
A: Okay. So this would be the CBOR tag for the problem type. It is applied to a map, and the map has a number of standard problem details entries, which get negative numbers in my proposal, and it has unsigned integer entries, which are the custom problem details; they are specific to the type that is expressed by this CBOR tag. I'm calling this the bundled approach, because it essentially has one step:
A: You define your problem type, and with that you define your custom problem detail entry keys, and that's about the technical substance; we don't really need a lot more here. Of course, we still have a discussion of these predefined, or standard, entries.
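The bundled shape described above (one CBOR tag per problem type, negative keys for standard entries, unsigned keys for that type's own custom entries) can be sketched roughly like this. The tag number 4711 and the entry keys 12 and 17 are the illustrative values from the slides, not registered assignments; the DETAIL key number and the balance/accounts values are made up for the sketch, and a plain (tag, map) tuple stands in for a real tagged CBOR item.

```python
# Sketch of the "bundled" Concise Problem Details shape: all numbers are
# illustrative stand-ins, and (tag, map) stands in for a tagged CBOR map.
DETAIL = -2  # hypothetical standard-entry key number

def bundled_problem(tag_number, standard, custom):
    """Wrap one map (standard + custom entries) in the problem-type tag."""
    assert all(k < 0 for k in standard), "standard entries get negative keys"
    assert all(k >= 0 for k in custom), "custom entries get unsigned keys"
    return (tag_number, {**standard, **custom})

problem = bundled_problem(
    4711,  # the illustrative problem-type tag from the slides
    standard={DETAIL: "Your current balance is 30, but that costs 50."},
    custom={12: 30, 17: ["/account/12345", "/account/67890"]},
)
```

The point of the bundling is that registering the tag 4711 is the single step that also fixes the meaning of the unsigned keys 12 and 17 inside it.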
A: So this is a solution that's pretty oriented towards using CBOR, which is the natural thing for something that I came up with, and I wrote this up; that's in the draft right now. But of course, which is always the problem if you work with people who are brighter than you, Thomas came up with another way of doing this, which I'm calling the unbundled approach.
A: He just completely gets rid of the problem type and instead uses a global registry for these custom problem detail entries, so that there is no longer a bundling like in the previous case, where 4711 is the bundle that defines 12 and 17; you would just define the problem detail entries you need. And of course it then makes a lot of sense to have these be maps, so you don't need to have ten of these for your next problem type but can actually define one.
A: So those who know what the CWT is will feel right at home, because that's exactly the idea of how you put together your CWTs: you use registered claims, and from these claims you actually generate claim sets. Here, you would use registered custom problem details and build your problem details description.
A: As an example, this looks like this: you have the standard problem detail entries, which would continue to exist and would also be a registry, and then you have the problem...
A: ...the out-of-credit balance; and example.com has also defined another custom problem detail entry key that they are using in several of their problem types (but problem types don't exist anymore), and so they just say which accounts were available for doing this transaction.
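The unbundled shape just described is one flat map with no problem-type tag: negative keys come from the standard-entry registry, and unsigned keys come from a single global registry of custom problem detail entries. A rough sketch, where all key numbers are hypothetical illustrations rather than registered values:

```python
# Sketch of the "unbundled" shape: one flat map, no problem-type tag.
DETAIL = -2        # hypothetical standard-entry key
BALANCE = 4711     # hypothetical registered custom entry: remaining balance
ACCOUNTS = 4712    # hypothetical registered custom entry: available accounts

problem = {
    DETAIL: "Your current balance is 30, but that costs 50.",
    BALANCE: 30,
    ACCOUNTS: ["/account/12345", "/account/67890"],
}
```

Because BALANCE and ACCOUNTS are registered independently, another problem description can reuse either of them without defining a new type, which is the composition benefit discussed below.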
A: Another way of doing this would be to say: okay, we just define a single custom problem detail entry, which here again gets the number 4711, and that has an internal structure; it's just another map, as you can see from the nested braces, and that internal structure is local to this 4711. So example.com, when they define what their custom problem detail...
A: ...entry key 4711 means, would define how this map is structured and describe what 0 and 1 mean, which I describe with comments here in this CBOR diagnostic notation. And from there, of course, you could also have alternative ways of doing this: instead of registering a number, you could use a URN, like this tag...
A: ...URI here. That's just one example; you might as well use an https one, but this uses tag because it's probably particularly useful in this context.
A: A tag URI is a URI that makes use of a DNS name, like an https URI would do, but it allows you to say that this is the state of the DNS name at a specific point in time.
So
this
here
means
whoever
had
example.com
in
2022
defined
something
called
out
of
credit,
so
with
http
https
you
have
the
problem
that
you,
you
don't
know
whether
the.
A: ...domain name actually changed hands; by adding a date, you actually make this permanent. So that's the useful thing about the tag URI, but that's just an example here.
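The tag URI scheme being described is the one from RFC 4151: an authority (here a DNS name) plus a date, then the locally chosen specific part, so the minting authority is pinned to whoever held the name at that time. A tiny sketch of the construction:

```python
# 'tag' URIs (RFC 4151): tag:<authority>,<date>:<specific>.
# The date pins the authority to whoever held the DNS name at that time.
def make_tag_uri(authority, date, specific):
    return f"tag:{authority},{date}:{specific}"

uri = make_tag_uri("example.com", "2022", "out-of-credit")
# uri == "tag:example.com,2022:out-of-credit"
```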
A: So that's the same thing we had before, except that example.com has not gone to the trouble of actually registering the thing, but has just made sure there is a permanent URI. Of course, the disadvantage is that the thing is even longer than it was before. Another example: of course, you can also have two custom problem detail entries with URIs, and these actually can be structured internally. So this shows how to do composition, so you don't have to describe your entire world for every single problem.
A: So these are examples for the unbundled approach. This is not yet written up, but it's probably a small matter of editing to put that into the draft instead of the bundled approach. And one question is: is that something we can decide? I'm pretty open; I do like my approach of doing things, but I also like Thomas's approach here.
C: What is the event in mid-June that matters, the 3GPP release? Okay.
B: So, I mean, of course it would be better to have an RFC and so on, but the 3GPP folks do understand that's a bit overly optimistic. If we can have a version that we can say is, you know, feature-stable and is going to be used, they would be willing to use this version, and then, in the following version, just update the reference.
H: Yes, okay. So, in terms of readiness: apart from the document, you would need the IANA registries in there, right, in order for this to be usable?
A: The registries will start to exist at the time the document is approved.
F: So what wasn't quite clear to me from this presentation is what the road forward with Problem Details should be. My understanding from earlier discussions was that we might still have later CoRAL-based problem details that are more powerful and more expressive, and in some way isomorphic to the problem details we do now.
F: If that is the case, then I would prefer that we went for something where people register their problem details, be it the bundled one, which I rather like because it's a bit easier, or one of those with registered indices, because then we can use that input while creating the mappings and getting the CoRAL-based problem details done. If we don't want to go there, that's moot, of course.
F: Either would work, but if people start defining their own tags, we lose track of what is used out there, and then there might be usage patterns that don't translate that well. If people are using tags, especially if the tags or anything else are registered, we can look at what people are doing.
F: We have an expert review that might at least give people the hints that they might need to keep working even with a future CoRAL-based format, one that would already be evolving at that time, and avoid the kind of later-phase situations where people did something like in the earlier example of the HTTP-based document that just didn't work out the way the authors expected.
A: Yeah. Of course, an interesting variable here is what the allocation policy for the various registries actually is. For CBOR tags it's pretty clear that people can do FCFS and will do FCFS, so you wouldn't get the benefit of an expert review; and for the other ones, in the unbundled model, I probably also would tend to make most of that registry FCFS.
F: Even in an FCFS space, we at least see what is going on, as compared to unregistered tags, where we can't find out.
H: Christian, could you articulate a bit better what the worry is here? Because it seems to me that, compared to 7807, we are establishing a forward-compatibility policy here from the onset, which they don't have, and that's what...
F: I mean, first of all, there is the issue of discoverability. Once we define CoRAL-based problem details and an interconversion, any converter would have to know which detail types there are; for the arbitrary URI-based ones they can't know, while for the registered ones...
F: ...we can at least ask that there be a referenceable pointer that they can look at, and maybe even later describe what they can place there, or what they can publish in terms of mapping. If it's just a tag URI, there's no way that people find this. And in terms of what could actually be in there:
F: For example, we have various places in this example where there are URI references; we might want to consider using CRI references there, because they have a chance of being sufficiently stable by then. But a list of CRI references, like here in available accounts, is something that we would need to consider when describing this mapping.
G: So, Christian, do you expect an easy, almost automatic conversion from these tags into CoRAL types?
G: I'll say it again: do you expect, or foresee, an easier conversion from the tags registered for problem types into later-on problem types in CoRAL? (Yes, if it's...)
B: Good, thanks. So I'm thinking about these kinds of short-term and long-term considerations, and the evolution. Maybe the key short-term considerations, just to double-check that they are covered: first would be feature parity with the HTTP problem details, but I guess that's quite clear to me.
B: It looks okay, but I just wanted to make sure that we can express the same things we do on the HTTP side also on the CoAP side. And then the second part would be: if I understood correctly, the current HTTP problem types have URIs as the type, and here we wouldn't have. Is the mapping from one to the other trivial, and what are the steps for someone having to do that? That might be a, I don't know, useful appendix, perhaps, in that document, to explain that.
H: Okay, yes, I think the answer to both questions is yes: (a) they're sort of isomorphic, and (b) we could add, you know, translation hints in an appendix to explain how to go from one to the other.
G: Christian, you mentioned that minimal cooperation from tag authors would be good. Do you have in mind any particular recommendation or guideline for registering tags to ensure this, or how do you shape this cooperation?
F: Ah, with the new one you hear me much better, right? Yes. Well, first of all, I somehow agree with Carsten that FCFS is a good policy, so we can't make any requirements there.
F: What I roughly envision for the conversion process is that, by the time we have CoRAL problem details, we would suggest there be some description of the packing dictionaries that are used in CoRAL anyway, which are needed for practically using any CoRAL document, and in the document that describes problem details for CoRAL we could describe additional data in there that allows the conversion. So, for example, taking the out-of-credit balance here:
F: The CBOR packing data, the kind of external, simple packing data that is not necessarily what applications use but something that an application could conveniently also provide, would describe how a CoRAL document describing out-of-credit would have its dictionary items: saying that, say, number five is out-of-credit balance and number six is available accounts, with full URIs in the background and all the things you need to make it CoRAL.
F: So those would be numbers that go into the tag numbers in packed CBOR, and along with that information there could be additional data that says: and, by the way, in pre-CoRAL problem details, a problem that has this property would map out to, say, non-standard detail 4711 sub-detail 0, and the other one would go into 4711 sub-detail 2, with the conversion policy that they are always integers.
F: That's one form of cooperation I envision. Now, certainly there will be people who have published problem types by that time that don't have that and will never get that information, and someone might later still define a dictionary for them. But if someone converts it, then they can just do this in a single document.
A: Yeah, but the way that Thomas's proposal does this is actually pretty much painless, so I think the pain they are having on the 7807 side is probably more an issue of baggage coming from the existing format than something we would necessarily incur.
A: But nested tags sound really weird, because then you have two namespaces for numbers that might conflict; instead, stretching this over the custom problem detail entries sounds much more likely to work.
B: So they were called cause and invalidParams; I'd have to look at the spec again to see what they actually mean, but yeah, there were two custom entries with those names.
B
But
it's
that
3gbp
technical,
spec,
29
122.
I've
contacted
the
information.
I
can
also
try
to
dig
it
up
now.
A: So what I would like to take out of this meeting is that the working group gives Thomas and me a little bit of leeway to actually do an -03 of this document, either increasing the detail on the bundled approach or including the unbundled approach, and then have a very, very quick transition to a working group last call.
G: Okay, then we can move to the next item: that's the OSCORE-EDHOC draft, and Rikard will present.

I: Yes, hello everyone.
I: Yes, perfect. So today I would like to present the profiling-of-EDHOC-for-CoAP-and-OSCORE draft, or rather the updates that have been made to it. Let me get into the presentation. Unfortunately, on my screen the figure looks a bit blurry, so apologies if it's the same on your end; hopefully it's good enough. But just to recap: what is EDHOC in the first place? Well, it's a lightweight authenticated key exchange.
I: What this document wants to do, basically, is to provide an optimized workflow for this key establishment procedure, where you combine one of the EDHOC messages with an OSCORE request: you can combine the third EDHOC message with the first OSCORE request that the client wishes to send. The main point of this is, of course, to reduce the number of round trips before you can actually start your OSCORE communication.
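The round-trip saving described here can be sketched by listing the message flights of the two workflows. The grouping below follows the spoken description (EDHOC message names are the protocol's own; the "2.04 response" flight is my assumption about how message_3 is acknowledged when EDHOC runs over CoAP), so treat it as an illustration, not the draft's exact ladder diagram.

```python
# Flights as (direction, messages) pairs; each C->S flight opens a round trip.
vanilla = [
    ("C->S", ["EDHOC message_1"]),
    ("S->C", ["EDHOC message_2"]),
    ("C->S", ["EDHOC message_3"]),
    ("S->C", ["2.04 response"]),          # assumed acknowledgement flight
    ("C->S", ["OSCORE request"]),
    ("S->C", ["OSCORE response"]),
]
optimized = [
    ("C->S", ["EDHOC message_1"]),
    ("S->C", ["EDHOC message_2"]),
    ("C->S", ["EDHOC message_3", "OSCORE request"]),  # one combined request
    ("S->C", ["OSCORE response"]),
]

def round_trips(flights):
    # Count client-initiated flights; each one starts a round trip.
    return sum(1 for direction, _ in flights if direction == "C->S")

saved = round_trips(vanilla) - round_trips(optimized)  # one round trip saved
```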
I: Essentially, this saves you one round trip compared to the vanilla workflow defined in EDHOC. The document also covers some other general points about EDHOC transported over OSCORE or CoAP, and this includes things such as the conversion of OSCORE identifiers to EDHOC identifiers, because there is a concept of identifiers in both protocols, essentially; and it also covers OSCORE-specific processing of EDHOC messages, and the extension and contents of the EDHOC...
I: ...application profiles, which are, let's say, configuration parameters for a particular EDHOC resource that control some elements of the EDHOC execution. It also covers web linking for the discovery of EDHOC resources and their application profiles, including elements within those profiles; so this can help you discover an EDHOC resource and also the appropriate configuration to use with that particular EDHOC resource.
I: One point that was updated was the client processing for the EDHOC + OSCORE request, and it's now clarified that you should not have more than one outstanding interaction, so not more than one of these EDHOC + OSCORE requests in place...
Assuming
that
they
are
the
edoc
oscar
request
for
the
same
server
and
they
are
related
to
the
same
education
identified
by
cr,
so
under
those
constraints
you
should
not
have
more
than
one
outstanding
interaction,
and
this
is
kind
of
to
ensure
that
the
client
impatient
impatient
client,
you
know,
will
not
flood
the
server
with
excessive
amounts
of
another
point
that
was
updated
was
on
the
server
processing
side.
I: It's now clarified that, once EDHOC message_3 has been fully processed, the server will rebuild the OSCORE-protected application request, and what's now explicitly stated is that at this point, when you have the rebuilt OSCORE-protected application request, you can remove the EDHOC option from that request. This EDHOC option is attached to the...
I: ...EDHOC + OSCORE request for signaling purposes, to make clear that this is an EDHOC + OSCORE request; but internally in the server, as you do the message processing, after you have rebuilt the OSCORE request you can remove this EDHOC option, because it's not needed from then on. It's just signaling to the EDHOC processing what to do with this request, namely to extract the EDHOC-relevant part and actually run EDHOC based on that message.
I: In that case, the server wants a clear indication of exactly which message is actually including this EDHOC message_3 as the blocks are being reassembled, and then you should remove this option.
I: So that's that point. Some further updates: we also updated the sections on how you select the EDHOC connection identifiers, on both client and server, with more precise guidelines now. Basically, the point is to be consistent with the uniqueness requirements in the OSCORE RFC, in the sense that the identifier you choose should be available overall; it must be available among the security contexts that you have with a zero-length ID Context, because the way EDHOC is currently defined, you will have a zero-length ID Context.
I: So you really want to ensure that whatever identifier you choose doesn't collide with any other existing security context. If you have other security contexts with a non-zero-length ID Context, that's fine, because you can use that to disambiguate between them. Then there are further editorial fixes and improvements.
One point is to no longer say "perfect forward secrecy" but rather simply "forward secrecy"; the example figures have also been improved; and one point that has been highlighted is that C_R is not in the payload of the EDHOC + OSCORE request.
I: Rather, the server will recompute it from the kid in the OSCORE option, because it's really redundant to have both C_R and the kid. Since the identifier is already in the OSCORE option, as OSCORE defines it, it's sufficient to have it there, and from that you can calculate C_R for EDHOC.
I: Proceeding to the next slide, another question: when can this combined EDHOC + OSCORE request get too big? Well, maybe you're using a large ID_CRED, for instance a certificate chain, or maybe a very large EAD_3 for the external authorization data. In that case, the EDHOC + OSCORE request can get big, depending on how much information you're putting in EAD_3 or ID_CRED.
I: So this is relevant because of the possible use of block-wise. On the client side: first of all, you do the OSCORE protection of each inner block as usual, but if the block you're currently processing is not the first one, meaning that the Block1 option is not zero, then the client must not add the EDHOC option, but simply send the protected request as is. The key point is that only the first inner block will have...
I: ...EDHOC message_3, the actual EDHOC data, within it, not the subsequent blocks. So, as the next point says, if the protected block is the first one, meaning you have Block1 set to zero, and EDHOC message_3 plus the OSCORE ciphertext is larger than the maximum unfragmented size (a parameter defined for OSCORE block-wise to avoid a proxy that could otherwise inject arbitrary amounts of blocks)...
I: ...so in this case, if that condition holds true, you abort, and you can possibly switch to the original vanilla EDHOC workflow. One point is that basically no further inner block-wise can happen once the EDHOC + OSCORE request is assembled: the workflow is really that you start with the request and you split it, then you apply the OSCORE protection, and for the first block only you add the EDHOC data, and at that point it has already been split into blocks.
I: So you can't do, you know, yet another round of block-wise splitting. As it states, when you have this finalized EDHOC + OSCORE request, there's no further inner block-wise that can be done at that point, because it happens before. And feel free to jump in if you have any comments on this.
F: What I'm a bit confused about is: how can you have an EDHOC request in a non-initial outer block-wise message in the first place? Outer block-wise only happens within a given OSCORE context, so there is already the OSCORE context, and having EDHOC there is kind of... I don't see how this would happen in the first place.
F: Yeah, I concluded that this is about outer block-wise, because it says "OSCORE protection for each inner block is as usual", so as it should be.
F: Yeah, I think a client might have good reasons to do outer block-wise as well, but this should be completely independent of the EDHOC process.
I: So, now looking at the server side, essentially at using block-wise with the EDHOC + OSCORE request: on the server side, if this EDHOC + OSCORE request does in fact have block options, that means that outer block-wise was used, because the only scenario we're considering now is that the client never uses outer block-wise. So if you see a block option (before unprotection, of course) on this EDHOC + OSCORE request, it must be the case that there's a proxy in the middle which has done outer block-wise.
I: So it should just work out with outer block-wise if there is a proxy in the middle, because the server just reassembles that as normal and is then able to do the processing: extract message_3, execute the finalized EDHOC procedure and derive the OSCORE context, and as a next step it can process the OSCORE part of the request.
I: This new text on block-wise brought back an old question, let's say, that was thought about and discussed before, and that is: if you are in fact using block-wise for the EDHOC + OSCORE request, inner block-wise from the client...
I
So, in that case, when does the optimized workflow stop being convenient to use? Right, I mean, we're comparing now the vanilla workflow with this optimized workflow, and if you have block-wise it may not be as clear anymore what's preferable, or what kind of choice you should consider. So we go into that a bit deeper now. So now it's really covering the case where you have the optimized workflow and you actually have block-wise from the client, that is, inner block-wise from the client. So we have some definitions here.
I
We have set these up to make the following section easy to understand. Basically, we defined here A, that's the size of the actual application payload the application wishes to send; we have B, which is the size of EDHOC message_3; and then we have this LIMIT. So the LIMIT is really...
I
I mean, the practical maximum number of bytes that you can send before you have to use block-wise, and that can depend on, you know, your network setup: basically, it could be the UDP maximum datagram size, it could be the IPv6 MTU. But regardless, whatever that limit is, you have a certain limit on the maximum number of bytes you can send before you practically have to use block-wise. Of course, you can still use block-wise even if you are below that limit, if you just choose to do that.
I
But this is the practical upper limit. Then we also consider OVERHEAD, that is, the overhead from all the different layers, including OSCORE itself, and then we define LIMIT*, which is your practical maximum limit minus the overhead. This LIMIT* then becomes the limit you should actually be considering.
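The quantities just defined can be sketched in a few lines; the function name and the example numbers here are illustrative assumptions by the editor, not values from the draft:

```python
# Illustrative sketch of the size budget discussed above.
# LIMIT: practical maximum message size (e.g. UDP maximum datagram
#        size, or constrained by the IPv6 MTU);
# OVERHEAD: bytes consumed by the lower layers and by OSCORE itself;
# LIMIT* = LIMIT - OVERHEAD is what the client should actually budget for.

def limit_star(limit: int, overhead: int) -> int:
    """Compute LIMIT*: the practical maximum payload size before
    block-wise becomes necessary."""
    return limit - overhead

# Example: an assumed IPv6 MTU of 1280 bytes and 100 bytes of overhead.
print(limit_star(1280, 100))  # -> 1180
```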
I
As
a
client,
basically
so
then
the
question
becomes
right,
so
sending
the
end
class
oscar
request
that
is
going
to
work
out
fine
in
two
cases,
first
case,
if
you're
not
using
in
a
block
price
from
the
client.
In
that
case,
you
need
a
restriction
that
the
a
is
less
than
the
limit,
meaning
the
size
of
the
application.
Payload
is
less
than
the
limit.
B
I
Basically, well, since you're not using inner block-wise, you really have to make sure that A and B together, the combined size of the application payload and EDHOC message_3, is below your LIMIT*. Otherwise, well, it's just too big a message to send; you have to use inner block-wise.
I
If that condition doesn't hold true, well, you can still use inner block-wise, but then this secondary condition still has to hold, and that is that the EDHOC message_3 size must be below LIMIT*, and your block size plus B must also be below LIMIT*. The reason is that only the application payload can be split into blocks.
So, if the EDHOC message_3 size is larger than LIMIT*, well, you can't just append it to your first inner block and send it, because you would be exceeding the limit you have to consider on the network. And the reason we involve the block size here is that, in combination with EDHOC message_3, you will have the block-size amount of bytes from the application payload plus the size of EDHOC message_3, and that combination has to be lower than LIMIT*.
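The two size conditions just walked through might be captured in a short sketch (variable names are the editor's own; A is the application payload size, B the EDHOC message_3 size):

```python
def fits_without_inner_blockwise(a: int, b: int, limit_star: int) -> bool:
    """Without inner block-wise, the application payload (A) and EDHOC
    message_3 (B) travel together in one message, so their combined
    size must stay within LIMIT*."""
    return a + b <= limit_star

def fits_with_inner_blockwise(b: int, block_size: int, limit_star: int) -> bool:
    """With inner block-wise, only the application payload is split into
    blocks; message_3 is appended after the splitting, so B alone, and
    block_size plus B, must both stay within LIMIT*."""
    return b <= limit_star and block_size + b <= limit_star

print(fits_without_inner_blockwise(500, 400, 1000))  # -> True
print(fits_with_inner_blockwise(400, 256, 700))      # -> True
```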
F
I think that, from an implementation point of view, before I implement this kind of sending of message_3 separately based on these equations, I'll just implement sending the initial request with outer block-wise, and then sending it later in blocks, without... without outer block-wise.
I
I mean, yeah, if you're fine with your client directly using outer block-wise like that, that's certainly also an option. But continuing a bit on this, just for some practical guidelines, and kind of rules in a sense, or just how things are in practice: now, again, if B is larger than your LIMIT*, you practically can't use the EDHOC + OSCORE request, because simply the EDHOC message_3 part alone is too large to send under the restrictions...
I
...we had in mind, considering only inner block-wise, of course. So that's one rule. And the second one is that if your A, that is, the application payload, is larger than LIMIT*, or the combination of A plus B is larger than LIMIT*...
I
Well then, practically, you have to use inner block-wise, because you can't send that as a single request, since it will exceed the size; yeah, it will be too large to send on the network you're using, essentially. So then you should switch to using inner block-wise, and be careful when choosing the block size, to make sure that the block size plus the EDHOC message_3 size is lower than or equal to LIMIT*. Since we add EDHOC message_3 after the block-wise splitting, you have to make sure that whatever block size you have, plus the message_3 size, is not larger than that limit, because if it is, you're again sitting with a message too large to send. And, again, right, you can still use inner block-wise, of course, even if you're not exceeding the limits, simply because you choose to, or for whatever particular reason you still want to use block-wise even when not strictly forced to use it.
I
Now, if you are using inner block-wise, then we calculated here, for comparison purposes, the number of round trips that you would need to complete EDHOC and then exchange OSCORE-protected data. So, in the case of the optimized workflow, you really end up with 1 + ceil(A / block size). So that's really, yeah, dividing the application payload by the block size.
I
And then you add one, because you still have the first round trip of EDHOC, which is message_1 and message_2, and in the first inner block-wise message sent, EDHOC message_3 will be attached. So, yeah. But now, comparing that to the original workflow with block-wise: of course, what you end up with is quite similar, but it's just that, at the end of the formula...
I
...you also have to add the size of EDHOC message_3 over the block size, and take the ceiling of that, that is, ceil(B / block size). Yes, because now you're sending message_3 as a separate message, so you also split it using block-wise, as a separate procedure. And just to summarize, or as kind of the conclusion: I mean, the optimized workflow is always more convenient here; you will have that its number of round trips is never higher than the original workflow's.
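The round-trip comparison just stated can be sketched numerically (illustrative only; it assumes EDHOC message_1/message_2 count as the initial round trip, as described above):

```python
import math

def rt_optimized(a: int, block_size: int) -> int:
    """Optimized workflow: one round trip for EDHOC message_1/message_2,
    then the application payload (A) split inner block-wise; message_3
    rides along with the first block."""
    return 1 + math.ceil(a / block_size)

def rt_original(a: int, b: int, block_size: int) -> int:
    """Original workflow: message_3 (size B) is additionally sent in its
    own, possibly block-wise, exchange."""
    return 1 + math.ceil(a / block_size) + math.ceil(b / block_size)

# The optimized workflow never needs more round trips than the original.
print(rt_optimized(1000, 256))      # -> 5
print(rt_original(1000, 120, 256))  # -> 6
```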
I
So
that's
encouraging
and
good,
but
then
we
continue
a
bit
more
here,
so
yeah
the
first.
The
up
report
here
is,
I
guess
you
probably
can't
see
my
mouse,
but
that
report
is
just
a
recap
of
the
last
slide.
So
we
did
consider
all
that
is
a
particular
corner
case,
and
that
is
when
essentially,
you
could
have
sent
edit
message.
3
like
basically
like
this.
I
If,
if
you
weren't
using
the
optimized
workflow,
you
could
have
sent
the
request
out
of
my
c3
without
having
to
use
clockwise
but
because
you're
using
the
optimized
workflow
you're
forced
to
use
block
wise
due
to
that.
I
Well, you simply get three round trips here, as it's just a normal EDHOC execution in that case. So this can be the kind of corner case where just the fact that you're using the optimized workflow forces you to use block-wise: essentially, since you're combining EDHOC message_3 and the OSCORE request, you will get a larger message to send, right?
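The corner case can be illustrated with made-up numbers (all values below are assumptions for illustration, not from the draft):

```python
import math

# Corner case: the application payload A fits within LIMIT* on its own,
# but A plus EDHOC message_3 (B) does not, so only the optimized
# (combined) workflow is forced into inner block-wise.
LIMIT_STAR = 1000   # assumed practical limit after overhead
A = 900             # application payload alone fits
B = 150             # EDHOC message_3

assert A <= LIMIT_STAR        # original workflow: no block-wise needed
assert A + B > LIMIT_STAR     # optimized workflow: block-wise is forced

# A block size that splits A into just two blocks keeps the optimized
# workflow at three round trips, the same as a plain EDHOC run.
block_size = 512
print(1 + math.ceil(A / block_size))  # -> 3
```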
I
However,
the
optimized
workflow
can
still
not
be
worse
like
it.
It
will
not
divorce
in
terms
of
round
trip
time
round
trips,
and
so
it
it
does
depend
on
what
block
size
you
choose,
and
I
mean
ideally,
you
should
choose
such
a
block
size
that
you
can
split
this
message
in
in
two
blocks
and
that
would
be
likely
because
it's
it's
likely
that
you
can
do
that,
since
you
probably
would
be
not
very
much
over
the
limit,
since
a
itself
is
below
the
limit.
It's
just.
I
When
you
add
the
adopted
three,
you
go
over
the
limit.
So,
however,
I
mean
like
even
in
in
this
case,
you
would
still
be
forced
to
use
the
add-on
plus
oscar
request
and
inner
block-wise.
So
it's
just
like
your
yeah.
Let
me
just
continue
actually,
I
think
it
continues
here
on
the
next
slide,
so
yeah
just
kind
of
the
main
takeaway
when
inner
block
price
is
used.
I
...the optimized workflow will yield fewer round trips. And, as I said on the previous slide, you have this corner case where the optimized workflow requires block-wise but the original workflow does not require inner block-wise. So, again, the optimized workflow is still, it can be, I mean, depending on the block size that you choose, not worse; it can actually be better, or equal. It's just that the problem is that, in this situation, in this corner case...
I
You
wouldn't
really
see
an
advantage
in
terms
of
round
trips,
and
you
just
have
these
extra
processing
steps
of
being
forced
to
use
the
combined
request
and
also
consider
that
you
have
to
use
clockwise
and
if
the
end
result
is
you
know
the
same
amount
of
round
trips,
there's
really
not
a
strong
or
a
good
reason.
I
...I would say, to use the optimized request in this particular corner case. Again: when you could have sent that message without block-wise, but the simple fact that you're using the optimized workflow forces you to use block-wise, in that particular corner case we suggest that the client should not use the optimized workflow, since, yeah, you end up with the same round trips as just the vanilla workflow, but you have to use block-wise and the combined-request processing.
I
Block-wise, essentially inner block-wise, that is. So that was just for some kind of guidelines and recommendations on what you should do. Another point:
I
Well, we should now revise and simplify the text about OSCORE and EDHOC identifiers, because there have been some major changes recently in the EDHOC draft about identifiers; now they are intrinsically only CBOR byte strings, so that changes the logic that we currently have, and actually simplifies it a lot. And we want to cover a little bit more the actual use of the compression option once that's available, because it could also fit quite nicely. Yeah, some additions on the security considerations. And, yes, as a status note:
I
Here
we
do
have
a
running
code
for
this
built
for
eclipse,
californium,
that's
written
in
java,
and
that
implements
I
mean
edoc
itself
based
on
the
version
12
draft,
and
it
also
implements
this
optimized
request
model
and,
of
course
this
will
be
updated,
along
with
updates
to
that
draft
and
up
text
updates
to
this.
This
document
itself
also
that
I'm
presenting
now
and
yeah
any
comments
and
reviews
are
very
welcome
on
this
document.
F
If I understood things correctly, yep, on the topic of the "no more than one outstanding interaction": is this just a recommendation, right? So it's a normative SHOULD, but not ruling it out, right? Because, if someone has arrived at an NSTART greater than one, this might be something they kind of...
I
...I believe it's not forbidden. Well, yeah, I was going to say: currently, I believe we do in fact have MUST NOT; that's the language we use. So it's a MUST NOT: the client must not have more than one simultaneous outstanding interaction.
F
Yeah, the thing is, the server can't rely on the client not doing it anyway, so the server has to do a check anyway, well, so as not to process the same message_3 again. So, yeah, if that check already needs to be there, it's just a matter of keeping things streamlined, and not a matter of security or interoperability.
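The server-side check being described could look something like the following sketch; the names and the deduplication-by-hash approach are the editor's invention for illustration, not from the draft or from Californium:

```python
import hashlib

# Sketch: a server remembers which EDHOC message_3 payloads it has
# already processed, so a repeated one is not processed again, even if
# a client keeps more than one interaction outstanding.
_seen_message3: set[bytes] = set()

def accept_message3(message3: bytes) -> bool:
    """Return True the first time a given message_3 is seen, False on
    any repeat (meaning: do not process it again)."""
    digest = hashlib.sha256(message3).digest()
    if digest in _seen_message3:
        return False
    _seen_message3.add(digest)
    return True

print(accept_message3(b"m3"))  # -> True
print(accept_message3(b"m3"))  # -> False
```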
I
Yeah,
I
see
your
point,
I
mean
the
way
we
wrote
it
now,
what's
really
like
as
a
requirement
on
the
client
to
not
do
this
but
fair
enough
like
if,
if
at
the
end
of
the
day
as
a
server
you
well,
you
can't
control
the
design
dust.
So
you
have
to
be
ready,
regardless
of
of
this
possibility
happening
so
yeah.
That's
that's
something!
Hopefully
we
can
consider
to
mild
in
that
language
and
not
have
it
as
a
strict
must
not
yeah.
Thanks
for
the
feedback.