From YouTube: GitHub Quick Reviews
B: All right, good morning, everybody, or good evening, depending on where you're joining us from. This is another API review, and today we hopefully only have a few items because, well, there's not much going on. We have most of our red items addressed, and that's pretty much a good thing, because it means we are making forward progress towards zero. Maybe today we can get everything done, but one item, the utilization abstraction, will be discussed and, I think, two weeks from now people are on vacation. I would suggest we just jump right in. Adam, do you want to talk about this one?
C: The problem is that it doesn't work when you have multiple producers that write to the same file. In the example that I have provided over here, we have two threads that are basically trying to append data to the same file. This is a common scenario for loggers, for example, where we have multiple modules and they are all logging something to a single text file. The solution is to use native capabilities exposed by every OS.
C: The point is that each OS allows us to append only to the end of the file, whatever it is, so we don't seek; we don't monitor the offset. We just transfer this responsibility to the OS. However, the mismatch with the current implementation of FileMode.Append is that the current implementation allows for seeking.
C: So we open the file and we try to append at the end of the file. However, we also allow for seeking, so you can open the file, let's say it's 10 bytes, write 5 bytes, and then move the offset to byte number 12 and overwrite the content again. I would like to make a proposal for introducing a new mode called AppendAtomic, which does not allow for seeking (changing the offset position) and always appends only to the end of the file.
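The native capability the proposal builds on can be sketched in Python (not the proposed .NET API; this uses the Unix O_APPEND flag the discussion refers to). With O_APPEND, the kernel atomically moves the offset to end-of-file on every write, so two concurrent writers appending to the same log file cannot clobber each other's position:

```python
import os
import tempfile
import threading

# Two threads append to one file opened with O_APPEND; the kernel makes
# each write land at the current end of file atomically.
path = os.path.join(tempfile.mkdtemp(), "log.txt")

def writer(tag, count):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for i in range(count):
            # One short write per line; O_APPEND appends it at EOF atomically.
            os.write(fd, f"{tag}:{i}\n".encode())
    finally:
        os.close(fd)

threads = [threading.Thread(target=writer, args=(t, 200)) for t in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()

with open(path) as f:
    lines = f.read().splitlines()
print(len(lines))  # 400: no write lost despite two concurrent writers
```

Without O_APPEND, each writer would track its own offset, and interleaved writes could overwrite each other, which is exactly the multiple-producer failure described above.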
D
C: Yes, so that's an excellent question, and the answer is tricky. I would like to paste the link into the chat. I had a conversation with Stephen Toub about this, and our current doc for FileMode.Append says: opens the file if it exists and seeks to the end of the file, or creates a new file; this requires Append permission. Trying to seek to a position before the end of the file throws an IOException.
C: The point is that the current implementation only checks the end of file when the file is being opened, and once we append something we don't update the end of file. I believe that it's a bug, and my argument for that is that before .NET 6 we didn't have a single unit test that would assert the fact that we can open the file, append something, and then seek backward.
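For contrast, the OS-level append mode makes the seek question moot. A small Python sketch (again using O_APPEND as a stand-in for the proposed behavior, not .NET code): the file offset you seek to is simply ignored for writes, because the kernel re-reads end-of-file at write time:

```python
import os
import tempfile

# With O_APPEND, seeking has no effect on where writes land: every write
# goes to the current end of file, so overwriting by seeking is impossible.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"0123456789")    # the file is now 10 bytes
os.lseek(fd, 2, os.SEEK_SET)   # try to move the offset back to byte 2...
os.write(fd, b"XY")            # ...but the write still appends at EOF
os.close(fd)

with open(path, "rb") as f:
    content = f.read()
print(content)  # b'0123456789XY' -- the seek did not cause an overwrite
```

This is the mismatch being described: the current FileMode.Append checks the end of file once at open time, whereas the native append mode enforces append-only on every write.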
C
However,
for
example,
steven
doesn't
agree
with
my
interpretation
and
jeremy
kuhn.
The
previous
owner
of
system
io,
is
that
it
would
be
a
breaking
change
and
it's
better
to
introduce
a
new
mode.
So.
E: I think, in addition to that, atomic operations in general, even for the file system, can potentially be more expensive or have different performance characteristics compared to non-atomic operations, and I don't think we want to change an existing non-atomic operation to become an atomic operation by default. I think we need both, for different scenarios and use cases.
D
E: There's no need to be atomic if you don't have concurrent access to the same file.
D
B: So that one I would agree with, because it's kind of a problem of the shape of the API, not so much the behavior of the OS, right? It's the typical pattern of, like, File.Exists and then File.Open, right? That's kind of fundamentally busted. But to the atomic argument, you could say: if the user passes in a FileShare of whatever the equivalent of None is (I forgot what it is), basically when you don't have any sharing, we could just silently fall back to the old behavior.
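The "exists, then open" pattern called busted here is a classic check-then-act race: between the check and the open, another process can create or delete the file. A short Python sketch of the race-free alternative, where the check and the create happen in one atomic kernel operation (O_EXCL):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "state.txt")

def create_exclusively(p):
    """Create p, failing if it already exists -- atomically, in the kernel.

    Unlike `if not exists(p): open(p, 'w')`, no other process can sneak in
    between the existence check and the creation.
    """
    try:
        fd = os.open(p, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    except FileExistsError:
        return False
    os.close(fd)
    return True

first = create_exclusively(path)   # True: we created the file
second = create_exclusively(path)  # False: it already exists now
print(first, second)  # True False
```

The same principle motivates pushing append semantics into the OS: let the kernel make the decision atomically instead of checking state in user code first.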
B: We don't pay the perf penalty, right? So I think you could argue that, ignoring the breaking change, if you do a FileShare of, like, Read or Write, and you say Append, we basically have an atomic append behavior. At least from a performance standpoint, that seems reasonable: you only pay for it when you actually care about concurrent access.
B: That might be true too, yeah. I mean, my question would also be: what does the C runtime do? Like, I have no idea what the equivalent in fopen is for this thing, but it seems reasonable. If the default in C would be, you know, using an atomic append, then it can't be that bad as a default, right? But yeah.
C: And to be honest with you, when I got introduced to System.IO and became the owner, and I read a book about, you know, files, basically, I was surprised by how our current implementation was written. Also, I have found a few questions on Stack Overflow asking why we aren't using the native capabilities exposed by the OS here; I mean the O_APPEND flag on Unix and FILE_APPEND_DATA on Windows.
B: Yeah, I mean, on the one hand, the breaking change argument, I get it, but my question really would be how breaking we think this actually is in practice. We've had this discussion now on multiple features, and I really don't like the idea of us adding more and more API surface to give people more and more options when we believe those options should be the default, right? Because it just means that, by default, the code people write that feels intuitive isn't good enough, right?
B: It seems intuitive: if you do FileMode.Append, that looks reasonable, right? And it's like, well, that's the bad one; you should use FileMode.AppendAtomic. I think if we do this over time, the APIs just become more and more complicated and more and more advanced than they need to be. So, say we believe the breaking change is real but, you know, somewhat fringe.
B: One option we can consider is, because this is for .NET 7 anyway: we ship it in an early preview of 7, give people a compat switch to turn it off, and then we just see how many people actually complain. If nobody complains, then I guess it's good enough, right?
E: You could probably observe some difference in the output of the file, due to what used to be concurrent writing no longer being concurrent, but I do expect that there will potentially be some performance differences, and that would be my concern, sure.
E: But I think the bigger concern is that the file APIs in general do not have a lot of atomic operations exposed today. So if we were going to say, let's start making breaking changes to make things like File.Append be atomic by default, we should take a more holistic approach to the file system APIs and to what it would take to make all of them atomic by default, if that's the direction we want to go.
B: Because I think there are different kinds of atomicity, right? I mean, we've already had the mode in the product forever, but you can actually share the file, right? So it's similar to, what is this called, named pipes, where you basically share the buffer across multiple processes and you have to coordinate how you write to that buffer. That's kind of similar to how FileShare access on a file works as well, right?
B: I mean, it's fair to take a larger look. I just, I don't know, I'm also kind of not liking this taking smaller things, or holding smaller things hostage for, this larger thing that we may never do, because then it's very hard to make progress. I mean, it does strike me, given the API shape of FileStream, where we actually have FileShare already.
B
It
doesn't
seem
super
unreasonable
to
say
like
there's
this
super
small
scenario
of
like
yeah.
I
have
a
common
file
that
I
share
and
you
know
all
the
writers
are
just
appending
to
that
thing.
So
I
want
the
append
operation
to
be
atomic
right.
That's
not
like
a
super
complicated
concurrent
scenario
right
and
just
unblocking
that
by
introducing
either
that
flag
or
by
saying
we
change
the
default,
behavior
doesn't
seem
necessarily
a
bad
thing,
even
if
we
never
do
the
larger
thing
right
now.
B: With FileShare you'd just fall back to the old behavior transparently; you wouldn't have to give people an API for that, and that's a fair point. That's why I said earlier, to me, the pro argument is: sure, we should measure it, but if you say FileShare.None, then the old behavior clearly is good enough, because nobody else.
E
Can
open
the
file
you
might?
You
might
have
file
sharing
and
your
own
existing
primitive
locking
mechanism,
that's
different
from
the
file
system,
one
because
you
might,
for
example,
be
using
a
logger
to
write
to
a
file
and
the
logger
is
only
used
from
within
your
own
process,
and
so,
therefore,
you
have
your
own
synchronization
primitive.
That
is
much
cheaper
than
necessarily
whatever
the
os
is
using,
which
is
more
likely
a
kernel
level
mutex
to
synchronize
across
multiple
processes.
Trying
to
do
the
operation.
B: So we could take a look. I think, to me, it would be interesting to understand a bit more what Stephen's concerns were, because I don't think I fully understood what you were saying he said, but I feel like it has been pasted.
B: Yeah, so to me, I mean, the fact that it is a breaking change: we should, of course, be deliberate about that; we shouldn't just stick it in and then call it a day. But just because there is a breaking change doesn't necessarily mean, to me, that we must do an API for that, because we do have quirks, and we do have the ability to say: sorry, it's a new major version; we believe the new behavior is good enough. So that's why, to me, it kind of depends on how breaking
B: it is. Or, like, do we want people to have both options available to them? So far, I would say, to me at least, it's not clear that we know that for a fact; it seems like there are different opinions in the room on whether people want to explicitly choose one over the other.
B: Then there is the scenario which, you know, I have an opinion on, but it's largely uninformed and based on gut feel. I mean, Tanner's example makes sense to me at a high level, that you want to just share within the process, but I'm also not necessarily convinced that that means we can't possibly change the behavior. I mean, it would be okay for me to say: yep, sorry, you're one of those one-percenters unhappy about the perf; go use different APIs, or go use the other one, right?
C
E: Right, but that's the difference between what C provides by default, what C and C++ provide by default, and what the POSIX APIs provide by default. One is more like an operating-system-level API, in which case you're not using the standard library, you're effectively directly P/Invoking, and we could expose those, but that's then different behavior from what the default for a given programming language is.
E
C: Okay, so I think we can agree that I'm going to write some benchmarks and measure the potential performance difference.
C: If there is no regression, then we are going to reconsider introducing a breaking change. If there is a regression, I will be back with this topic a week from now and we are going to discuss it again.
B
C: Yeah, I'm going to measure that as well. One thing that is important when it comes to FileShare is that on Windows the file locks are mandatory, but on Unix they are just advisory, so anyone can choose to ignore the lock, basically.
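The advisory nature of Unix locks can be seen in a few lines of Python (a sketch using flock, which is one of the advisory mechanisms being referred to): a writer that never asks for the lock is not stopped by it, unlike Windows share modes, which the kernel enforces against every opener.

```python
import fcntl
import os
import tempfile

# On Unix, flock/fcntl locks are advisory: they only constrain processes
# that also take the lock. An uncooperative writer can ignore them.
path = os.path.join(tempfile.mkdtemp(), "locked.txt")
open(path, "w").close()

holder = open(path, "r+")
fcntl.flock(holder, fcntl.LOCK_EX)   # exclusive advisory lock is held here

# A second opener never takes the lock, so its write simply succeeds.
with open(path, "a") as rogue:
    rogue.write("wrote anyway\n")

with open(path) as f:
    content = f.read()
fcntl.flock(holder, fcntl.LOCK_UN)
holder.close()
print(content)  # 'wrote anyway\n' -- the advisory lock did not stop the write
```

So any FileShare-based fallback behaves differently across platforms: a benchmark or compat decision has to account for Unix writers that never participate in the locking protocol.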
E: I think it's good for us to expose both capabilities and then let users pick and choose, and then, if we have a use case of, well, it's not very convenient to use these APIs, we can build something on top of that. That way you have the ability for low-level framework authors to do the thing that they need, while also allowing app authors to do the convenient thing for them.
E
Not
just
better
behavior,
I
think
that's
completely
reasonable,
too,
like
if
we
determine
it's,
not
a
terrible
breaking
change
to
to
make
atomic
pen
be
the
default.
I
would
still
be
in
the
position
of
well.
We
should
be
exposing
a
way
to
non-atomically
append
as
an
alternative
that
way.
Framework
authors
who
actually
care
about
the
nuance
and
the
difference
can
do
the
right
thing.
It's
the
whole
reason
we
expose
the
you
know
the
interlock
primitives
in
system
threading
and
the
reason
why
we
expand
those
with
new
types.
D: But still, can't you get that just by opening the file without Append, using FileMode.Open? The only difference between that and FileMode.Append today, it sounds like, is that we do some stuff around trying to prevent you from seeking. If all you care about is getting the performance of non-atomic append, I think you can do that today with FileMode.Open, right?
B: Oh, I think, oh, this is the right thing, here we go. So I don't consider this necessarily the average person's code, right? Because people very often use the File.Open overloads or the convenience ones, like File.ReadAllText and File.WriteAllText, right? And so it might be okay to say this is the power-user API: the constructor takes all the arguments and you basically have to know what you're doing here.
B
So
this
is
the
power
user
api
that
gives
you
more
control,
and
so
we
could
say
the
middle
ground
option.
Is
we
just
think
about
higher
level
apis
to
make
people's
lives
not
more
miserable,
while
also
giving
the
you
know
the
power
user,
the
switches
that
they
need?
That
makes
sense
to
me
because
I
think
that
that's
kind
of
the
you
know
the
the
role
of
the
bcl
is
to
allow
effectively
the
more
advanced
stuff
while
also
making
you
know
normal
code
decent
right.
B
The
only
thing
I
don't
like
is,
if
we
say
like
oh
yeah,
we
messed
up
a
default
in
v1,
and
now
we
are
stuck
with
that
behavior
forever
right.
That's
while
that's
often
true,
there
are
cases
where
we
can
just
make
a
breaking
change
and
make
people's
life
easier
without
asking
everybody
to
call
the
new
overload
and
passing
in
more
and
more
parameters
right
and
whether
that's
the
case
here
or
not,
I
think
that's.
I
don't
have
good
intuition
on
this,
but
I
think
this
middle
ground
might
be
also
an
option.
We
should
consider.
B: I have certainly, how do I say this, I've certainly hurt myself badly by using memory streams and having issues with either disposing or not flushing streams in the middle. That is the thing.
A: I am, although neither Jeremy nor Levi are.
B: So the problem is that, with the existing ones, once you want to obsolete them: they basically choose the number of iterations, and the hash algorithm is basically baked into this thing, right? And we believe that, for security, people should actually choose the other overload that makes them choose the hash algorithm. Is that correct?
A: Yeah, so there's a bunch of constructors on Rfc2898DeriveBytes that accept the number of iterations and the hash algorithm. This class was introduced in 2005, and for the constructors where we don't accept the hash algorithm or the number of iterations, we pick ones for you. In 2005, the ones that we picked made sense; in 2021, they're not good anymore.
A: We can't change the defaults because we need deterministic behavior, and even if we could figure out a way to change the defaults without breaking people, whatever we choose now would be bad again in another 16 years.
A: Yeah, exactly. So this is something where, if you do want to change this, somebody has to version it correctly within their own application. It's not something that we can just change for them.
A: The one thing that probably makes this a little bit difficult is that the constructor that accepts the hash algorithm is actually very new; it was added in .NET Core 2.0 and .NET Framework 4.7.2. So the constructors that aren't taking the hash algorithm probably have very high usage.
A: The suggestion to work around that was to implement an analyzer with a quick fixer that would move you to a constructor that makes those defaults explicitly called out. So we wouldn't be able to change anything for you, but we could at least quick-fix you to a constructor that did fill in the default values for you, so that when you look at that line of code, you go: oh, wait a second, I'm using SHA-1, or, right, I'm using a thousand iterations. And the obsoletion should steer people.
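The principle behind the fixer, making the hidden algorithm and iteration count visible at the call site, can be illustrated with Python's stdlib PBKDF2 (a sketch of the idea, not the .NET API; the SHA-1 / 1000-iteration values mirror the 2005-era defaults discussed above):

```python
import hashlib

# Both choices are spelled out explicitly, so a reader of this line can
# see immediately that the first derivation uses the weak legacy defaults
# (SHA-1, 1000 iterations) and the second a modern choice.
password, salt = b"correct horse", b"per-user-salt"

legacy = hashlib.pbkdf2_hmac("sha1", password, salt, iterations=1000, dklen=32)
modern = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=210_000, dklen=32)

# Same inputs, different keys: the algorithm and iteration count are part
# of the derivation, which is why the defaults can never be silently
# changed without breaking every stored hash.
print(legacy.hex() != modern.hex())  # True
print(len(legacy), len(modern))      # 32 32
```

This also shows why the quick fixer can only make defaults explicit, not upgrade them: changing either parameter changes the derived key, so the application has to version the transition itself.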
B: Right, and then presumably other analyzers would kick in that say, hey, you're explicitly using SHA-1, you probably shouldn't, which is another analyzer that I think we have, or at least we're planning on having. So that seems sensible. I think, to me, the availability doesn't really matter, because when we apply Obsolete we do it in some version, and that version has the other overload. So it's actionable, right?
B: So if you get the obsoletion message, the other constructor is there for you to call, so it's not a problem. You would only get the warning on .NET 7, right, and by .NET 7 you will have had the other overload available for three versions now, so it should be good. Or five versions, I think; even, right now, six versions.
B
Two
four
five,
six
seven
yeah.
I
don't
know
what
doesn't
show
in
dot
net
five.
Oh
because
it's
in
a
different
thing,
never
mind
yeah,
I
can
oh
yeah
branding
doesn't
help
so
so
in
that
sense,
I
think
I'm
okay
with
the
obsoletions.
The
fixer
is
interesting
because
I
think,
as
you
said
like
it's
not
like
people
probably
want
to
make
sure
that
they
at
least
can
replicate
what
they
currently
have.
B: And I think Jeremy suggested a message anyway, which I liked, because then, if you do the quick fixer, cooler; if you don't do the quick fixer, at least you have it in the error message, and then you know whether it's a thousand iterations or, you know, hash algorithm SHA-1 that you need to indicate.
B: Okay, then, let me just mark this API as needs-work. I think it's good, but I think it would be useful to get Jeremy's and Levi's take on the final messages. The one thing we should also do is assign the diagnostic ID, so we can document it, and then people can also suppress it if they really want to, because it is four overloads.
B: And I think this was, yeah, I think this is something that I already looked at offline. It's pretty straightforward; it's just another thing to expose. We already have Arabic Extended-A, so it seems pretty straightforward to expose Arabic Extended-B. So, unless somebody has an objection, I would just approve it; it seems like a no-brainer.
E: So it might be confusing to some people who think that it means, like, oh, maybe there is some randomness to the output and I want it to be deterministic. But I think the people who know what a DFA is, for regex, will instantly recognize it, and everyone else will be able to read the summary and understand it there as well.
B: Yeah, I'm fine with calling it either DFA or spelled out. I think if you do regex, you probably know what a DFA is, would be my guess; at least, I knew what a DFA was well before I started computer science.
B: Yeah, I mean, given that he listed the other ones, I think he just wanted something in the API proposal; it's probably his preference, but I don't think he's necessarily gung-ho about it. Personally, I think they're all fine choices; I don't really love any one of them, like, oh my god, this is the name. They're all contortions to a certain extent.
B: Can you combine Dfa with Compiled, or is this, okay, so it's.
B: Yeah, I mean, it seems like that's true regardless, right? Because, basically, when you activate that mode, there are restrictions on what regexes you can use, and it has a particular set of implications for your runtime, which you need to research no matter what. So I think whatever the documentation is for this parameter, sorry, for this enum flag, it would explain what a DFA is, probably in the very first sentence.
E: In particular, my only concern with just Dfa is that it's a three-letter acronym, and it's going to be capital D, lowercase f, lowercase a, because that's the naming convention for .NET, and so just Dfa seems not friendly, but DfaEngine or Dfa.
H: Here, the algorithmic complexity is bounded. The thing I mentioned, oh, sorry, I jumped in because I realized Stephen's out, so I thought I'd stick my head in. The thing I tried to mention in the chat, and I think Eric was suggesting this, is that it's not at all necessarily a case where, when you're picking this, you're choosing between fast and safe. This may well be fast and safe. So something.
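The "bounded complexity" point can be made concrete with a toy comparison (an illustration of the general technique, not the actual .NET engine): a naive backtracking matcher explores an exponential number of choice paths on the classic pathological pattern a?ⁿaⁿ, while a set-of-states (NFA/DFA-style) simulation of the same pattern runs in time linear in the input.

```python
def match_backtracking(tokens, text, counter):
    """Naive backtracking: try every consume/skip choice; exponential."""
    def go(ti, si):
        counter[0] += 1
        if ti == len(tokens):
            return si == len(text)
        ch, optional = tokens[ti]
        if si < len(text) and text[si] == ch and go(ti + 1, si + 1):
            return True
        return optional and go(ti + 1, si)
    return go(0, 0)

def match_stateset(tokens, text):
    """Non-backtracking: track the set of live pattern positions; linear."""
    def closure(states):  # positions reachable by skipping optional tokens
        out, stack = set(states), list(states)
        while stack:
            ti = stack.pop()
            if ti < len(tokens) and tokens[ti][1] and ti + 1 not in out:
                out.add(ti + 1)
                stack.append(ti + 1)
        return out

    states = closure({0})
    for c in text:
        states = closure({ti + 1 for ti in states
                          if ti < len(tokens) and tokens[ti][0] == c})
        if not states:
            return False
    return len(tokens) in states

# Pattern a?^n a^n against 'a'*n: tokens are (char, is_optional) pairs.
n = 10
tokens = [("a", True)] * n + [("a", False)] * n
calls = [0]
ok_bt = match_backtracking(tokens, "a" * n, calls)
ok_ss = match_stateset(tokens, "a" * n)
print(ok_bt, ok_ss, calls[0])  # both match; the backtracking call count blows up
print(match_stateset([("a", True)] * 200 + [("a", False)] * 200, "a" * 200))
```

Both matchers agree on the answer; the difference is only in cost, which is why an engine like this can promise worst-case linear matching at the price of dropping backtracking-only features such as backreferences.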
H: 'Constrained' could date quite quickly, because it's like: we want everyone to use Constrained. It's sort of like how minidumps are actually the biggest dumps; the term didn't age well. So that might be something that makes Constrained seem a little like you're compromising, when actually you may not be compromising, and that's why I would be less inclined to it. And Dfa is implying an algorithm, which is probably a stronger implication than it should be, because the implementation can jump into, like, an NFA in the middle.
H
If
it
looks
like
it
needs
to
so
it's
really
dfa
and
kind
of
flavor
rather
than
actuality,
and
and
so
it
would
be
nice
if
we
could
somehow
enter
at
its
behavior
rather
than
its
implementation,
that
that
was
just
my
take.
B
I
don't
know
what
to
do
with
that,
because
it
mean
it
seems
like
we
want
this,
that
everybody
should
have
that
as
the
option
moving
forward,
which
kind
of
makes
me
think
the
name
needs
to
be
that
peop
that
you
know
would
you
know
people
would
see
that
and
say
like?
Oh,
I
probably
should
specify
that
which
means
any
sort
of
lingo
like
dfa
or
constrained,
or
you
know
that
kind
of
implies
people
do
research
first,
so
that
it's
more
like
an
advanced
thing
that
most
people
wouldn't
do.
B
And
yes,
we
could
have
an
analyzer
that
tells
people
to
do
that,
but
it's
probably
super
annoying.
We
wouldn't
do
it,
so
it
kind
of
I
think
we
need
to
have
a
good
name
that
seems
simple
like
compiled.
For
example,
right
compiled
sounds
positive.
It
sounds
fast
people
not
kind
of
get
a
sense
of
what
it
does
so
kind
of.
B: Yeah, it totally is true. I mean, it's just a question of: do you think a person using this API, typing RegexOptions dot and seeing the name Dfa, is likely to opt in or not? And that one I'm not sure about, because we know that, historically, the more obscure the name, the more people will not start with it; usually they find that the stuff works and then just move on, right?
E: Similar to the conversation that we had on the file system: have we considered taking a breaking change, making this the default, and adding a new Legacy option for when you want the old behavior? So, so.
H: We have discussed quite a bit whether we could essentially channel existing calls into this engine, where we know that the pattern is such that they will get the same or better results, and that is potentially a future direction to go, in, you know, some future version. But right now we just don't have the confidence, both in detecting which patterns would work, and in the new engine being quite as stable and well characterized as the existing one.
B: I think what Tanner was suggesting is what I'm suggesting here, which is: you don't look at the actual regex and decide which engine to use; you just say, now, on .NET 7, by default, you get the new engine. The new engine may not handle everything, and if you want the other behavior, you have to opt in somehow, use a legacy-engine flag or something.
H
We
can
talk
about
it,
but
unfortunately
backtracking
is
always
gonna
break
right.
So
I
don't
have
a
good
mental
picture
of
how
much
that
is.
There's
also
lesser
concern.
There's
some
things
like
atomic
assertions
that
aren't
implemented
yet,
but
but
there
will.
B: I mean, if that's the case, then maybe the name doesn't matter so much, because it's always something people have to choose, kind of like how they choose artisanal bread. At that point, it just becomes a preference thing for this scenario, in which case the name should be descriptive. But if we think the vast majority of people should really use the new engine, then it seems unfortunate if we have to have an option that people have to pass in, rather than just making it the default.
H
It's
tricky
another
thing
to
throw
into
this
is
the
api.
Existing
regex
apis
are
rather
rich
and
defeat
some
optimizations,
such
as
doing
everything
entirely
on
spans
right
and
they're
much
richer
than
almost
all
the
other
engines.
So
we've
discussed
potentially
adding
a
new
api,
and
at
that
point,
of
course
we
could
make
it
use
the
new
engine
entirely
and
nothing
else.
B: Yeah, we should send them the recording, which, fortunately, we have now. But I feel like, yeah, it seems like it's largely a discussion of two things: one, what do we name this puppy, and the other one is, how do we make sure that people pick it, right? Because, based on the conversation, I still think we would like most people to pick this thing for the cases when they can, and only use the other engine when they actually need the other behaviors, right?
E: It would seem like picking the right option, if it's not the default, would be. But even if it was the default, and we needed people to know when they had to pick the other one, then an analyzer is the right approach for it. Cyrus added support to Roslyn a while back for correctly processing and syntax-highlighting regexes, and I think the same APIs should be usable for doing basic analysis of: does this contain backtracking, and therefore does it require the backtracking engine?
H: Right, the funny thing there is, of course, Tanner, that if your regex is known at compilation time, the (does not currently exist) source generator is probably always going to be better. So the dynamic choice would be made on dynamic regexes, and then an analyzer presumably couldn't help with that. Yeah.
B: And Cyrus's thing is fairly dumb, so as soon as you extract things into locals, for example, it sometimes doesn't colorize anymore, because you'd have to do flow analysis to figure out what is a regex and what isn't. So, unless you're willing to do flow analysis, yeah, the analyzer wouldn't be super cheap.
B: All right, so then I think that's it for today, unless I missed something. Nope, that's it! So then, thanks, everybody, and I'll see you next week. Bye-bye.