From YouTube: SIDROPS WG Interim Meeting, 2020-04-28
C
The option to [...] the manifest options here is an option [...]. The RIRs will learn that responsibility is at the top of the apex; it's tough, [but] they're stepping up to it. [...] that's better than being obsolete [...] for every RIR. There may not be [...] for every child CA, their options, but I'm not really [worried] about it. We love them anyway.
D
So, this is Russ. I think that we know that the CRL needs to be put out according to the CP, for a few reasons: one, [the] Internet numbering resources [can change]; two, the keys [can be rolled]. So we need to have these, and we just need to make sure that everything is consistent at the time of publication. I think the example that Randy gave is a good one: just develop the tools to make sure things are consistent before you [get] in trouble.
F
This is Rob. Manifests have this one extra gotcha; it's just a technical thing you have to make sure of. At one point when we started this, people were using very short signature lifetimes on the EE certs in manifests, and that was a serious [problem]: it made manifests fail the way certificates do, rather than go stale. I think at this point everybody has got the memo, but if anybody hasn't: it's important to use a long signature lifetime on the manifest.
G
Hey, can you hear me now? Yeah, okay. So yes, I agree that we need more tooling, and, you know, this needs to be more stable, but I'm not entirely sure that the comparison to the DNS root is a good analogy, because I think that the rate of change on that is much lower, and so I think it's more feasible to do some additional sort of manual checking.
H
So, in response to what Orin noted: I agree that the frequency of change for manifests, as adopted in the RPKI, is fairly high. And so one has to balance the ability to test that against the operational requirements of rolling them as quickly as people have decided they wish to. But I also think that the kind of tests that—
H
Oops, a little whoops there for a moment. The tests that we were talking about are principally automated — and Russ can tell me if that's not what he had in mind — but I think the notion here is to have a set of software that you can run over all the products that are published, updated, et cetera, and then just go ahead and run that before pushing it out. I think that's what Russ had in mind, and that certainly — yes.
H
So it would basically be a form of relying party software, but perhaps a little better documented, and better at providing feedback to warn the CA, or the publication point maintainer, or the repository operator, of any potential inconsistencies before pushing out the new set of [objects]. That's my [view].
D
So, Andy, I actually believe there are two sets of tests: one run by ICANN and then another set run by Verisign. And I don't think we need to have the two-role, you know, generator/publisher thing going on here, but I do think that each time we have some experience like this, the set of tests that is run should grow.
H
Well, when we got to that part, we struggled with it, in all honesty, because there were trade-offs. [There was a] push to be very strict about some things. We were especially concerned that this [could result in a] potential [denial] of service of a particular form: that is, a relying party discarding lots of stuff because there was some issue with the manifest in terms of what was present or what was missing. So [the text involved] trade-offs; we tried to discuss what the trade-offs were, but recognized that it was going to happen.
H
[It was left to] relying parties in the RPKI to figure out what would have to be done. So I completely agree with Brandon's observation that it's now time to go back and produce the new RFC. There, I'd like to believe that everything [up] to Section 6 [is] okay — certainly open [to change]; funny how things change elsewhere — but mostly [we need] to go back, based on discussions in the working group, to say—
H
—[what] combinations of states [mean], and, with that in mind, find a way forward by saying, well, we're putting [a] stake [in the ground] now. I recognize, for my part, that I was thinking more in terms of traditional security concerns about [denial of] service, and I think George Michaelson on the list has made a very persuasive argument that the real concern here is if there's an indication, based on a current manifest, that certain objects that should be at a publication point [are] missing.
J
Okay, can people hear me? This is Tim. [...] So yeah, what I said in the chat is that I think it will help if we moved to a publication model using, as [was] proposed later, RRDP, because it will allow for the objects to be published atomically. So I think you can prevent a lot of the issues seen around glitches where the CRL and manifests and objects are inconsistent, if you will. It leaves other things to be considered, obviously, [but it would] help.
K
My comment in chat is that I think, if we are focused on Section 6, then [it's] specifically Section 6.5. If the last paragraph represents a problem, it only has one normative word — it's a SHOULD, "should result in a warning" — and I think what we are going to need is much stricter language. I think that you are correct to focus on Section 6, and I think a tabular construction of the cohesion of the different states — prior state, manifest, CRL, current state — and a decision logic that is less subjective is probably the outcome.
H
Yes, two observations. I agree with George's observation that 6.5 [is the] issue here, and that we need to look throughout the whole section, but especially there, to try and use more normative language. And the sense that I get from what I hear our readers have expressed is that warnings may not always be [enough]: they are not specific guidance about what to accept and what to reject, and we should just do that going forward.
D
[This is] Russ. In particular, if new objects were put in a staging area, validation tools were run against them, and then the staging area was transitioned to the place where rsync sees it, I think that kind of a pipeline would allow [things to] work. That's not to say I'm opposed to what Tim's proposing, but I don't think it's the core of the problem, for similar reasons to what Steve just said, I think.
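The staging pipeline described here could be sketched as follows; the `validate()` function is a hypothetical stand-in for real relying-party checks, not part of any actual RP implementation.

```python
import shutil
import tempfile
from pathlib import Path

def validate(obj_path):
    """Hypothetical stand-in for real relying-party checks (manifest,
    CRL, and EE-certificate validation) run against one object."""
    return obj_path.suffix in {".cer", ".crl", ".mft", ".roa"}

def publish(staging, live):
    """Validate everything in the staging area; only if every object
    passes is the staging tree copied to the directory rsync serves."""
    files = [p for p in Path(staging).rglob("*") if p.is_file()]
    if not all(validate(p) for p in files):
        return False  # a problem was found: leave the live tree untouched
    shutil.copytree(staging, live, dirs_exist_ok=True)
    return True
```

The key property is that the live tree only ever changes after the whole staged set has passed validation, which is exactly the consistency-at-publication-time goal discussed above.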
K
I think it's important that we recognize an aspect of the distinction between positive and negative statements. All cryptographic statements are, in effect, only positive: "I want you to believe me, here's my sig." You don't have the [antithesis] of "I want you to believe me, here's my bad sig." All you [ever] have is "here is my good sig." So it is always a positive cryptographic test — is this a valid statement? — and it invites the necessity of having the ability to make negative statements.
K
Randy, you famously said "stop inventing things" when we tried to invoke creating negative statements, where the negative was a negative assertion [about] routing. So we only have positive assertions, at this stage, as the [artifacts] that get made in our system. CRLs are a negative assertion: do not accept this otherwise-valid thing. And I think that is completely distinct from what a manifest [is]: a manifest does not have a list of things not to be accepted; it is only a positive statement of acceptance of things.
J
Yes. So, when I was making noises about CRLs, I was worried in particular about inconsistencies — you know, even if you list off in a new directory and change over, maybe race conditions with validators getting things. That was my main concern: that there's not this circular thing where, when you need to validate a manifest, you need to find its CRL, you need to verify that the manifest itself is not revoked by that CRL, and then continue. And while I do see great value [in] CRLs in general—
J
—the point I was trying to make is that, for things that are already on the manifest — if and only if we accept that that's the signed [set of] members — it doesn't make sense to publish something on a manifest and then also revoke it in a CRL. All that being said and done, I can live with the CRLs, especially because I think they have purposes outside of what's published inside the RPKI repository, and they're [used] in [more] use cases.
J
There's the RPSL signature document, for example. In any case — and I may be taking too much time here — my biggest concern is with all the things in Section 6 of [the] manifest [draft]; I think if that gets addressed, most of my issues are addressed. It's the "all the local policy — in terms of what we do, we do it" — yeah, that makes me nervous. So I hope that this makes it a bit more clear [where I] come from.
E
A point about the concept of local policy. I think maybe this is a misunderstanding in my mind, or maybe in the broader [group], but in the last few weeks, having deployed some RPKI stuff in a network: the concept of local policy, for us, is expressed through SLURM. That is the equivalent of local policy, and this concept of having local policy in the validation process itself may not really exist.
E
So if we revise some of the language around this, I think the phrase "local policy" should perhaps [be reworded or] removed, because local policy, in the context of routers, for us network operators, is what we put into the SLURM file or what we typed into the router — but not [validation] policy. Does that make sense?
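For reference, the SLURM mechanism mentioned here is the JSON format of RFC 8416; a minimal sketch, using documentation-range ASN and prefixes rather than real policy:

```python
import json

# Minimal SLURM (RFC 8416) file: filter out any VRP for one prefix and
# locally assert another. ASN and prefixes are documentation examples.
slurm = {
    "slurmVersion": 1,
    "validationOutputFilters": {
        "prefixFilters": [
            {"prefix": "192.0.2.0/24",
             "comment": "ignore any VRP covering this prefix"}
        ],
        "bgpsecFilters": [],
    },
    "locallyAddedAssertions": {
        "prefixAssertions": [
            {"asn": 64496, "prefix": "198.51.100.0/24",
             "maxPrefixLength": 24,
             "comment": "locally trusted announcement"}
        ],
        "bgpsecAssertions": [],
    },
}

print(json.dumps(slurm, indent=2))
```

A relying-party tool that supports SLURM applies these filters and assertions to its validated output, which is exactly the "local policy lives outside the validation algorithm" point being made.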
F
All the certificates [listed] in the manifest — you know, the X.509 [...] junk — [that's] a separate issue. I think it would be extremely silly to publish a CRL and manifest pair where the CRL revokes the manifest. It would be a dumb thing to do, so don't do that. [There's a] distinction between that and whether or not—
J
No — sorry about that! Well, okay.
J
I think it's really good to look at checking consistency in future work that we do, and I think a possible locus for that is actually on a publication server, where stuff is being sent to. I think you can add checks there: that deltas are consistent, that manifests don't revoke themselves in the CRL that they say is current, et cetera.
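Those publication-server checks might be sketched as below; the file-set model and the `manifest.mft` name are invented for illustration — a real check would parse the CMS/ASN.1 objects themselves.

```python
def check_delta(manifest_files, crl_revoked, delta_files,
                manifest_name="manifest.mft"):
    """Pre-acceptance checks a publication server could run on a delta:
    the manifest must list exactly what is being published, and the
    manifest itself must not be revoked by the CRL it points at."""
    problems = []
    missing = manifest_files - delta_files
    if missing:
        problems.append("on manifest but not published: "
                        + ", ".join(sorted(missing)))
    extra = delta_files - manifest_files - {manifest_name}
    if extra:
        problems.append("published but not on manifest: "
                        + ", ".join(sorted(extra)))
    if manifest_name in crl_revoked:
        problems.append("manifest is revoked by its own current CRL")
    return problems
```

Rejecting a delta that fails these checks keeps the self-revoking-manifest case out of the repository before any relying party ever sees it.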
C
The CA has to both publish it [via the] up/down protocol and publish it [to the repository]; on the [whole], that should take seconds to a minute. Next, the relying parties have to gather [it]. rsync is heavy on the server; therefore, once an hour seems a reasonable compromise. RRDP is much lighter; ten minutes seems to be the [accepted figure]. All these constants are what we're going to discuss in the next six months as we process this [...], right?
J
Okay, yes, right — thanks. So I want to do a quick talk about deprecating rsync and moving to RRDP. I made a document that I presented at the last face-to-face meeting, and since then I had discussions with Randy and George, and they've come on board as well. This is not a working group document yet — spoiler: we'll ask for that formally on the list later. Next [slide], please. So, what is the goal here?
J
I don't want to spend too much time on this — [if] people [want to] discuss more, we can do so. [rsync:] I think it's a great tool, but it can be heavy on the server side, especially when the [server] has to [serve] many clients, and this can lead to delays in RPKI content making it to routers. From the validation software point of view, rsync libraries are lacking.
J
So next, please. RRDP was proposed as an alternative, or an additional thing, and, to begin with, it allows for scaling with CDNs; HTTP client libraries exist in almost all languages; and it deals with deltas. There are constraints, though. If we say we want to move away from rsync: we have rsync [today], we have to deal with this, and things have to keep working. We cannot just, you know, take all the wheels off the bus and let it crash. We have to do this carefully. So next, please.
J
Well, in [the next phase], we would make it a MUST for relying parties to use RRDP, and even go further and say that they MUST NOT use rsync. This might go a bit far for some people, but the reasoning behind it is that if we want to remove the operational obligation to run an rsync server, I think you have to do this. The other side of this is that you need to do measurements, of course; you need to know where relying parties are — it's not only about a [single] relying party.
J
At the end, you can say: repositories MUST support RRDP and MAY support rsync; that would remove the operational obligation to run rsync. Although I do see [some] use [in] still running rsync — and it's not that hard to run an rsync server that people can access to look at objects — it's much harder to guarantee that it's available as a highly—
J
—you know, as a highly available service, and that's what I'm worried about. So, with regards to validation, I think it would be good [if] validators did not depend on it. But it doesn't mean that people cannot run an rsync server; I would think that that's not that hard to do, if you don't have the constraint that, you know, everybody validating must be able to access it all the time. So that's why I like phrasing it as a MAY, for the moment.
J
Obviously, this is an introduction to discussion — hopefully more discussion later — which brings me to the end. [Are there] one more slide or two? Yeah. So this is just a recap of what I just said; let's not spend more time here now — it might be useful for discussion. And next — I think that's all I had. Yeah, [the] process: so, I think it would be good to have a discussion about this.
J
This may not be the best way to do things, but I think it helps to keep everything together to discuss at this stage. Later, we may well find that it's better to separate these out into different documents, updating [the RFCs] for the different phases. But, you know, that's all I had to say about this for now, so I'm happy to discuss things here now, or later on the list.
K
So my question is probably not material; it's just an observation. Do we want the formalism of the ordering of the fetch URIs in the publish[ed objects]? Because I don't personally believe it's a high likelihood that the order of presentation in the ASN.1 affects the decision logic in a relying party validator about how it fetches — but the prior experience in the DNS is [that the] order of presentation of data elements actually does affect what people do.
J
[Let me think] about that. Yes — well, essentially, RRDP has its own object identifier; it's just different access methods, essentially. So it's not an ordering of schemes, but [of] the ordering of these things — you know, I don't know; not sure if I [understood] your question correctly, but I'm advocating a change [to] using the [RRDP] method. I—
K
[The] URIs — I believe it's a set; a set or sequence doesn't make much difference — but the question was: does the order of the presentation of these OID-denoted objects matter? And my gut feel is no, because the direction to the RP is: if you're given both, please use RRDP first. [The] problem for a relying party is, when [a] fetch fails, what do you do? And as an operator of a publication point, I can tell you I see the same IP addresses doing both protocols.
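A sketch of that "RRDP first, fall back on failure" behaviour, with stand-in fetcher callables rather than a real relying-party implementation:

```python
def fetch_repository(uris, fetchers):
    """Try access methods in preference order: RRDP (https) before
    rsync. `fetchers` maps a scheme name to a callable that returns the
    fetched objects or raises on failure."""
    preferred = sorted(uris, key=lambda u: 0 if u.startswith("https://") else 1)
    errors = []
    for uri in preferred:
        scheme = "rrdp" if uri.startswith("https://") else "rsync"
        try:
            return fetchers[scheme](uri)
        except Exception as exc:
            errors.append((uri, exc))  # fetch failed: fall back to next method
    raise RuntimeError("all access methods failed: %r" % errors)
```

This also illustrates George's operational observation: a relying party that falls back like this will show up at the publication point over both protocols.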
F
Sorry, Robert — I'm just cutting in for a clarification. George, [as] Tim actually told you — I think it may have been obscured there — it's not actually a list, because of the implementation details in terms of the ASN.1: it's actually two separate [entries in the same] database; there's actually two separate buckets of where the URIs are, [and] they have different [OIDs]. So, as a practical matter, all the implementations I know about treat them separately and are [not] going to treat them as an ordered list based on the [syntax]. I don't think it matters there.
B
Okay, Tim — I might be stretching this here a bit, but the draft name is "deprecating rsync", and when I look at this slide I see repositories saying "optional" and relying parties saying "none". I would like to opt, already, for the last phase to be: rsync [for] repositories, none — because we all know how it goes when you have to maintain stuff: if you [say] it's optional, people will [still] expect it. And I would, in that case, [prefer] that very [last] stage.
J
Well, I'm not completely decided on this, to be honest. What seems most important to me is that you don't have to have an rsync repository available all the time for validation to work, and that, when writing validation software, you don't have to support [rsync], I think — because it's an additional code path that may depend on things being installed that you don't have control over. That was my [thinking] about it.
J
That being said, I'm not against rsync repositories being available, especially for other purposes, but yeah, I think this is a question to be discussed. I mean, if all we end up doing is that now there are two—
J
Yeah, one comment about that — [it's] great to have this discussion; looking forward to more online. I guess part of [where I] come from is that there are rsync URIs defined in the standards, and they're really convenient, because when objects have names, it's much easier to talk about them and what's going on; and if there's nothing available at all, that might raise some eyebrows. Although there are parallels with XML, where namespaces are defined with HTTP URIs and there may not be anything there.
J
I just think it's convenient if you can get an object. All that being said and done, the URIs are included in RRDP, and if you parse the XML and you parse the Base64 content in it, you can find all these objects as well; it's just a little bit more work. And with that, I'll just pass on to the next person. [Is] there still anybody else? [...] Chris, you have [the floor].
M
I guess you can hear me okay — cool. So let's just move to the next slide. We're here to talk about [...], and we just resumed work on the AS-Cones; similar work is being performed here inside SIDROPS with ASPA, so it's also good to get feedback from other working groups. We've had, I think in the last couple of days, [a] series of feedback coming in from this working group, but we didn't get [any] from GROW. So, what has happened — next, please — I'll be quick here.
M
What has happened? We have a new additional author, [who] has joined me and Job. We have looked at the security model, because we were told that that was the culprit we had to work on, and, together with that, we have looked at a couple of new models for building prefix lists — so, for actually using the AS-Cones. So what does this mean? Let's look at the security model — next slide.
M
Basically, when you add an AS-Cone to another AS-Cone, this requires an acknowledgement. If the owner of the AS-Cone we're trying to add to doesn't acknowledge it, this is not a visible change. So this avoids anyone adding anything random to their own customer cone, which means we're going to add a little bit more security [compared] to what we have now [with AS-SETs].
M
On the other hand, if you add an ASN only to an AS-Cone, the acknowledgement is optional. And why do we have this? This is because [we] need to keep it simple for those [stub networks on the] Internet, where they don't have to do anything — and they might not even know that they have something. The [acknowledgement] is registered in the AS-Cone as a boolean "validated" field for each [entry], so we have a value of zero or one, and, based on this, we build a way to build filters — in the next slide.
M
Please — let's see how this turns out. Okay, so you have four ways of building the prefix lists: loose, opportunistic, almost-strict, and strict. For loose, we get any ASN [and] any AS-Cone in the AS-Cone indicated by your downstream — so you get everything. In opportunistic, you get any ASN and any AS-Cone, but for the AS-Cones you only pick the ones that have the validated field set [to] one — so, the ones that are valid, right?
M
[In almost-strict,] you walk the tree of your customer cone, and once you find an entry that is not [validated, you prune the] entire subtree — which means you basically punish that customer over [their] sub-customers, so that they could go back and [tell] their customers to validate their entries. And then strict is the one where you only consider an AS-Cone if each [entry in it is validated]. These are recent additions that we [put into the] draft; if you are interested in this, I would suggest [taking] a look. But why am I also presenting here?
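The four modes could be sketched as a walk over the cone tree; the nested-dict data model and per-entry "validated" bits below are an illustration of the idea, not the draft's actual encoding.

```python
def all_validated(node):
    # True only if this entry and every entry below it is acknowledged
    return bool(node["validated"]) and all(all_validated(c)
                                           for c in node["children"])

def collect(cone, mode):
    """Collect ASNs from an AS-Cone tree under one of the four modes.
    cone = {"asn": int, "validated": 0 or 1, "children": [cone, ...]}"""
    def walk(node, require_valid, prune):
        if require_valid and not node["validated"]:
            if prune:
                return set()       # almost-strict: drop the whole subtree
            asns = set()           # opportunistic: skip entry, keep walking
        else:
            asns = {node["asn"]}
        for child in node["children"]:
            asns |= walk(child, require_valid, prune)
        return asns

    if mode == "loose":
        return walk(cone, False, False)
    if mode == "opportunistic":
        return walk(cone, True, False)
    if mode == "almost-strict":
        return walk(cone, True, True)
    if mode == "strict":
        # only use the cone at all if every entry in it is validated
        return walk(cone, False, False) if all_validated(cone) else set()
    raise ValueError(mode)
```

Each mode yields a subset of the previous one, which matches the loose-to-strict progression described in the talk.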
M
There are possibilities of integrating this with ASPA. The two ideas have a lot in common: one looks one way [at] the relationships; the other one looks the other way around. And maybe it could be worth considering unifying them, or just building something to make the two systems interoperable. The reasoning behind this is also [that] implementations need to be made, at one point, for one or the other.
N
Okay, [thanks]. So, I will not address the last slide, where [you're] speaking about ASPA integration, because it's a broad question and we can discuss [it later]. My main question is about this idea of some kind of inheritance between AS-Cones. So, let's say that I have a customer, and this customer [has] another customer — [a] sequence of three. What should I do [if] this second-level customer [has] an AS-Cone and this is not verified by my direct customer?
N
The problem [is] the inclusion of AS-Cones, or [AS-SETs]. So, [from] what I know about current AS-SET deployment, it's getting worse and worse [as] we go from the Tier 1s to the [edge], and there is little chance that you will be able to fix the [edge]; and from the [edge] there will be more errors, and so you will just be learning and getting a broken AS-Cone, as you are now getting broken [AS-SETs].
M
And at that point, they should know they have to — basically, well, their transit would include them in their cone, and they would have to acknowledge that. So the idea is that they would get a sort of notification, by an email or something else, from the [party] where the AS-Cone comes from, and they have to go in and check the box.
N
Just to summarize: my concern is that the whole difference between the AS-Cone approach and the ASPA approach is that, in ASPA, the relying party is your provider, and to some extent you can believe [your] provider; in AS-Cone, your relying party is a [union] of your customers. I don't believe in such a relying party — but we can discuss it; maybe I'm wrong and it's my misunderstanding, I think.
H
Yes. So, what concerns me is the term "validated" [in the] corresponding data structure. This seems to be a validation that's asserted by the entity publishing this object, but [as] a relying party, [am I] to trust [that it has] correctly asserted that it's been validated? I think that concerns me a lot. What I tried to do throughout the RPKI is to avoid circumstances, [wherever] possible, where any operator can make an assertion that can't be appropriately validated [or] verified — [say,] using the strict hierarchic assignments for ASNs [and] address space.
H
Yeah — okay, you're mixing a combination of an activity by an RIR, with another database, into the RPKI. I think, for this to be viewed as secure, a different approach [must be] taken [to] propagate this information in a fashion that all relying parties can make use of. Just an observation. Okay, I'm [done].
E
Job Snijders, NTT. Can you go to the last slide? Yeah. So, about integrating the two: in my mind, AS-Cones is — let's call it an experiment to see if we can migrate some functionality that exists in the IRR, but not in the RPKI, from the IRR to the RPKI, because [AS-SETs are] commonly used now. ASPA, on the other hand, is an attempt to automate peer-lock[-style] configurations.
E
That's what I hope to get out of it, and I think those two serve very different roles — different applications, different positions in how we construct filters [for] EBGP sessions. So I'm not seeing an integration path, given that the purpose and intent of the two approaches is very different, and I also don't think it would reduce work in any meaningful sense to combine the efforts. So my preference would be to keep the two efforts separate and just see how things go from there.
K
ROAs are only signed by address holders; they are not signed by [an] ASN. So, if your intent is to make an assertion that is checkable against the rights of [the] holder, the question that I put to you at the time, and I'd repeat here, is: who signs? Because I believe your policy definition component is not a policy statement of the address holder; it's a policy assertion of [the] AS. And so you probably need to modify the semantics [and] intent behind signing, to clarify who signs.
L
Essentially, the AS-Cone objects don't themselves have the validated bits; the validated bits are collected by looking at the ASPAs that are confirming which are the client ASes of a transit provider. But that does not look like something that's going to fly nicely. That looks like a plane designed like a pig — and yes, we know pigs can fly; we just have to apply sufficient [thrust]. Thanks.
C
[...] and so on and so forth — and so we really would like a [definite] answer. But those of us who live in [formal] bits of security, who do not understand the authentication model here — [we] never have, [and] it keeps changing. So one way to [put] the issue in today's presentation is: you have a number of parties, each acting through a number of [stages], to construct approval.
M
We have thought about that, and you can remove yourself — your AS-Cone — from another AS-Cone; you can do that, and a holder can remove you as well. So have a look at the latest draft, the -02 version, and you'll see that we have thought about that; there is wording about that.
N
As you may know, the original ASPA followed [the] ROA design: it had multiple records for each customer autonomous system, which together form the set of candidates for the [validation] procedure. To prevent possible synchronization issues, we changed [the] semantics of this [object], so that a record [has] a sequence of providers, and there must be only one record [per] customer autonomous system. This guarantees atomicity [of] updates. The second part of this atomicity is [the] RTR protocol. Next.
N
The suggested ASPA PDU directly reflects the object. So, when we have an update [for] a selected autonomous system, it replaces the stored record, both in the cache and in the router; [thus no race] condition may happen. There are a couple of questions that haven't been resolved yet. The first one: how should we work with default-free networks, such as [Tier 1s]? There are two possible solutions for such a scenario: we may follow [the] AS0 style, or use [an] empty set.
N
Even for myself, I have no preference in this matter. Randy [argues] for the empty set; if nobody will stand up and shout that they want to keep with the AS0 [style], Randy will win. Another, more important, question: should we have different ASPA records for IPv4 and IPv6? To give a proper answer, I did some small research to study the difference between customers in IPv4 and IPv6. Next, please.
N
So, how did I do it? I took a known set of Tier 1s, which have [peering] connections with each other. If I saw a path with two Tier 1s present, I can [treat the leftmost part] as an upstream path. If I saw a single Tier 1 in the path, the result was nearly the same, with the exception that [we know nothing] about the link between — in this drawing — A1 and D1. [In theory, a proper] path should consist of customer-to-provider pairs, but there is some noise from the [data].
N
Now, evaluation. The basic idea was to check the algorithms described in the verification draft at scale. So I created [a setup]: I collected BMP data from border routers and used [...] to parse and organize the flow of data. In addition, I created 18 ASPA records [that] should represent [...] for the well-known Tier 1s, and I also specified ASPA records [for] Yandex. Next.
N
And here is an example of what we got. On the drawing, you can see [a prefix] leaked by [...]. I was also able to confirm this incident with data-plane monitoring. Another drawing is an example of another kind: there, ASPA logic is capable of detecting leaks that are coming from providers. In addition, [a] BMP feed gives easy access to paths that may include your own autonomous system number; together, it gives a way to detect leaks that are happening for your own address space.
N
So, where are we now? The foundation seems to be ready: we have three documents [that] together represent [the] ASPA object, [the] verification procedure, and [the] RTR PDU. There are a few questions that still need to be resolved, but I don't see any showstoppers. We have [an] implementation on top of [...], and [we did a] large-scale check of ASPA logic inside the Yandex network; its [behaviour conformed] to original expectations. I feel that, after some work on the text, we should be ready for working group [last call].
E
[On] the last slide, it was suggested that perhaps things should move towards working group [last call], but I would like to ask the group that we go a little bit slower. As it currently stands, this working group has significant work ahead of itself to improve the validation strategy around manifests and CRLs, and, from a time perspective, I would like to do a similar evaluation as [was done] here, [to see whether in those] contexts [I] will be able to support the documents, or point out that there are some corner cases that are problematic.
N
So I would agree that we may need [more] verification. To make my point: yeah, [let's] not [skip straight] to the working group last call the next day. [If], for example, you have time to check the logic of ASPA inside your network, [great]; [if] you can give us [word that other implementations] are ready, [it] will be even better. The more testing we have around ASPA, the [fewer] changes we will need — fixes, ad hocs, and other[s].
I
Hi — right, Alex. So I have a question which I had when I was reading your drafts, which is not quite related to your today's presentation; I'm not sure if it's been asked or answered. So I'm wondering: how do you represent the lateral peering relation using, you know, the ASPA profile or object? I mean, do you use two different pairs, like [A-to-B and B-to-A] pairs [...]? Or do you just not represent p2p relations?
N
No — ASPA will be able to detect such a misconfiguration, because we look at [it this way]: if we are receiving a prefix from [a] peering link, we're expecting that all [the adjacencies] in the AS path represent customer-to-provider pairs; [if] two peer-to-peer links are bound to each other, [it] will be an invalid path in these terms. And so that's why it's enough to use only customer-to-provider registration [to detect a peering] misconfiguration.
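That check — every adjacency beyond a customer or lateral-peer edge must be a customer-to-provider pair — can be sketched as below. This is a simplification of the draft's verification procedure: it collapses the "unknown" outcome for ASes with no ASPA on file into "invalid", and the data model is illustrative.

```python
def upstream_path_valid(as_path, aspa):
    """as_path is read left-to-right from the receiver: as_path[0] is
    the neighbour, as_path[-1] the origin. aspa maps a customer ASN to
    the set of provider ASNs it has authorized in its ASPA object."""
    for nearer, farther in zip(as_path, as_path[1:]):
        # walking toward the origin, every hop must be customer-to-provider:
        # the farther AS must have authorized the nearer AS as a provider
        if nearer not in aspa.get(farther, set()):
            return False  # a peer-to-peer or provider-to-customer hop: leak
    return True
```

With only customer-to-provider authorizations registered, two back-to-back lateral-peer hops necessarily fail this test, which is the point being made above.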
N
Yes — it will be expecting that autonomous system one and autonomous system two [are in a] customer-and-provider [relation]. But [when] autonomous system one is signing [an] ASPA, [it] also [indicates which] entries will not be included in the list, because they [are] peers; and, using this data, autonomous system three will be able to detect the leak that happens for address space that belongs to autonomous system one.
N
So, for a partial deployment, there are two scenarios. The first scenario is when there is a mistake. If there is a mistake, as I described, it's not a big deal, because if we have a signing party somewhere in the path, the receiver will be able to detect [the mistake] early.
N
If we are speaking about malicious activity, [in] a partial deployment, to secure your path you need [a] secured [full] path [up] to [a] Tier 1. If you are [a] Tier 1, the situation is simple: just create [what] is now called AS0 — or, one day, it will be named ASPA-empty, something like that. If you are [lower down], to secure your path you need to make your upstream providers sign [ASPAs], in addition to your own, and so on [up the chain] — but in the real world—
L
Well, it's the choice of the AS whether they want to protect or not — to do the ASPA or not. And yes, for having the full path [protected]: of course, the partial deployment does not protect fully when you transit some networks that do not do ASPA. I think ASPA as it stands is fine, and very helpful, for protecting peering relations. One could consider doing essentially the same data structure as ASPA for ASes that want to protect—
L
—their external relations being tracked in AS paths observed elsewhere, by doing essentially the same data structure and listing their peers — where they authorize the appearance of the peers in the AS path. That would be essentially a completely different RPKI assertion than the ASPA; it would just be structurally [similar], or could be done structurally the same. And I would not suggest going there until we have actually [carried] the ASPA into [the] RFC [process and it] settles.
N
Okay, okay, I will follow up. So I would [also] like to highlight [that] signing gives [us] another kind of problem, and the problem is called transparent [ASes]: when you have a lot of tiers, [...] it [is] important to the AS path, but you never know the full [picture].