From YouTube: DASH Workgroup Community Meeting Aug 17 2022
Description
Keysight/Mircea Q&A
Discuss keepalives in CPS test
Volodymyr Mytnyk - SAI PTF update
PR Reviews
193 DASH outbound pipeline packet test - needs Marian and Reshma review
187 Default drop support
194 Fix inbound VNET
190 Spell check
A: This is the meeting for August 17th, and I'll go ahead and present my screen. Let me know if you can see it. Okay. Last week we covered quite a bit: we had a huge presentation by Keysight and Mircea, we talked about these two pull requests here, the config generator and the hero test update, and I sent out notes. While we were discussing the presentation, we specifically noted the need to decide whether we need that keep-alive in the connections-per-second (CPS) test.
A: So I was hoping we could talk about that a little more today, in case anyone has thought about it in the last seven days. Maybe we could have a short Q&A on that to start, if you're up for it, Mircea, and then I was also hoping we could talk about moving toward stage three of the test maturity stages.
B: Yeah, for the keep-alive, there are two things we can do. Either the flow timer is modified from one second to, let's say, two or three seconds, and then the keep-alive is not needed, but that changes the fact that the hero test keeps the flow timer at one second. That would be one possibility for not sending the keep-alive. The other one:
B: Since the data was already sent, we can basically not send the keep-alive, and then the flow will be terminated by the DPU.
B: I don't think it will affect the stats on my side too much, but the DPU will probably consider these flows as terminated, if it holds such stats. So yeah.
B: Yeah, sorry, I can give a quick summary of that. Basically, say the device has a capability of, I don't know, 3 million CPS, but the requirement is to keep 6 million active flows.
B: You can bring up only three million flows every second. So in the first second you bring up three million, and in the second second you bring up another three million, for a total of six million. Now, those first three million flows need to live for two seconds. Otherwise, if you terminate them during the second second, you will not have six million flows on the DPU; you will have only three million, because the others expired. So that means the flow needs to live longer.
B: Let's say two seconds in this case. And if you need to keep it alive for two seconds, you need to send a keep-alive or two; otherwise the flows will expire with a timer of one second. So that adds an extra packet or two, because if you actually need to keep the flow for two seconds, you want to send the packet at about 0.999 seconds and not wait for the full second to pass, because otherwise the flow is gone, expired by the DPU.
B: So that is the context, and it adds an extra packet. Where the discussion started is that the hero test specifies six packets and talks about the TCP packets, but when it does the math it does not mention anything about data, nor about the keep-alive. And I believe there will be data packets, which adds two packets, so six plus two plus a keep-alive.
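The flow-aging arithmetic described above can be sketched roughly as follows. This is an illustrative sketch only; the helper names are mine and the numbers come from the example in the discussion, not from the hero test spec:

```python
import math

def required_flow_lifetime(target_active_flows: int, cps: float) -> float:
    """Seconds each flow must stay alive for the table to hold the target.

    At `cps` new connections per second, steady-state occupancy is roughly
    cps * lifetime, so lifetime must be at least target / cps.
    """
    return target_active_flows / cps

def extra_keepalives(lifetime_s: float, flow_timer_s: float = 1.0) -> int:
    """Keep-alive packets per flow when the DPU ages flows out after flow_timer_s.

    A keep-alive must arrive just before each timer expiry (e.g. at ~0.999 s),
    so a flow that must live 2 s against a 1 s timer needs one extra packet.
    """
    return max(0, math.ceil(lifetime_s / flow_timer_s) - 1)

# A device doing 3M CPS with a 6M active-flow requirement: flows must live
# 2 seconds, which against a 1-second flow timer means one keep-alive each.
lifetime = required_flow_lifetime(6_000_000, 3_000_000)
extras = extra_keepalives(lifetime, flow_timer_s=1.0)
```

Raising the flow timer to two seconds instead makes `extra_keepalives` return zero, which is the trade-off being discussed.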
B: Like I said, one solution is that we send the keep-alive. Another would be to increase the flow timer in the test from one second to two or three seconds. And the third would be to simply not send the keep-alive and just let the flow be terminated by the DPU.
A: Does anyone have strong opinions on this? I know we talked about it a lot last week.
D: And Mircea, just one additional suggestion regarding leaving the timer at two seconds: can you add some variance to it, so that flows age out at different times? That way you handle the case where a new flow is being learned while an existing flow is also timing out; basically, have the capability to vary the flow time between one and two seconds.
B: Oh yeah, the flow timer can be configured. The discussion was only about the fact that the test, as described in the hero test document present in Git, says one second. So yeah.
B: Yeah, it's configurable; people can set it to two seconds, three seconds, or one second. I was just bringing up that either the number of packets increases or we need to increase the flow timer. Both are configurable.
D: Yeah, I don't mean to ask for any change to the approach you mentioned. I meant to ask: even though we configure, say, one second or two seconds, should we allow the configuration to vary between one and two, so that for those three million flows the timeout can vary?
B: The test requires having all these flows aging out and new flows being installed every second, and this is mostly a stress scenario; it's not, let's say, the real use case. In the real use case I think the flow timer is set much higher, at, I don't know, 60 or 90 seconds. But in order to stress the device, the most stressful setting is basically one second, with new flows added and removed every second. So this is just a stress scenario, not the real use case.
E: Hey, actually, I have a question here. The flows you're talking about: are these the background flows you're referring to, or the CPS flows?
E: Because for CPS flows, at least my understanding is that each is a six-packet flow, right? It just opens the connection and then closes it. So why does it have to stay for more than a second? It should get closed within a few microseconds.
B: The problem is, if the DPU is not able to do 5 million and it does, let's say, 2.5 million, then you need to have the flows live for two seconds in order to have a flow table installed in the DPU of five plus one plus one million. Otherwise your flow table will be very small. The flow table will not get full if you don't keep those flows alive.
E: That's the part I'm not getting: why is that a focus? We have background traffic, as per the hero test: 2 million TCP and 2 million UDP background flows, and then whatever CPS the DPU can do. If it is, let's say, 3 million CPS, then half of it will be TCP; sorry, 1.5 million is TCP and 1.5 million is UDP. So the flow table usage will be about 4 million for background and 1.5 million for UDP.
E: And since we have a one-second timeout, then for TCP, at any given point in time maybe there will be something like 500 to 100 active flows within the DPU. But that's the expectation, right? So I'm not getting why the usage of the flow table within the DPU is a focus. This is just working as expected: the connections will just come and go, because each stays alive only for those few microseconds.
B: Yeah, so let me share my screen and I'll show again the paragraph that is in question: flow table size. You need to have 14 million flows, or 14 or 7 million; depending on whether you count both directions together or separately, it's calculated as 7 or 14 million. So this is what I'm talking about: flow table size. I need to see 14 million flows in your flow table, the device flow table.
B: You have two million from TCP and two million from UDP background traffic, and the rest are from the CPS ones. If I am to keep the flow table at this size and the DPU cannot do 5 million CPS, I need to keep the flows alive longer. Otherwise they age out of the table and the table never reaches 14 million. This is the line I'm referring to.
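The 14-million figure can be reproduced with the same kind of back-of-the-envelope math: background flows plus CPS-driven flows held for the flow lifetime, doubled if each direction counts as its own flow-table entry. A rough sketch; the helper is my own, with illustrative inputs chosen to match the 7M/14M split mentioned above:

```python
def flow_table_demand(tcp_background: int, udp_background: int,
                      cps: float, flow_lifetime_s: float,
                      count_both_directions: bool) -> int:
    """Rough flow-table occupancy during the test.

    CPS-driven connections occupy the table for flow_lifetime_s seconds,
    contributing about cps * flow_lifetime_s concurrent entries on top of
    the long-lived background flows; counting each direction as a separate
    entry doubles the total.
    """
    entries = tcp_background + udp_background + int(cps * flow_lifetime_s)
    return 2 * entries if count_both_directions else entries

# 2M TCP + 2M UDP background plus 3M CPS held for 1 s gives 7M entries,
# or 14M when each direction is counted as its own entry.
one_way = flow_table_demand(2_000_000, 2_000_000, 3_000_000, 1.0, False)
two_way = flow_table_demand(2_000_000, 2_000_000, 3_000_000, 1.0, True)
```

If the DPU only sustains half the CPS, doubling `flow_lifetime_s` keeps the product, and therefore the table occupancy, the same; that is the coupling being debated.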
B: It needs to be agreed with everyone. We can change it here and put two seconds, and then it's fine, maybe; but then if somebody cannot do 2.5 million CPS, they need three seconds, and again the test changes. So I'm okay with changes. This is why I'm saying there are options, but there will be changes, and somebody needs to agree, mainly Microsoft, I would say.
F: I think the problem with the way this is written is that there's no explicit description of the flow lifetime; you're inferring the flow lifetime from this line in this bullet. I think it would be better if it explicitly said the flow lifetime should be X, and then the flow table size is what it is based on whatever the flow lifetime is. I just think the specification of this hero test is...
B: I don't find a way, without sending the packet, to have this flow table. I actually ask the DPU, hey, what's your flow table size, and I get really small flow table sizes if I don't. I initially did the test where I was terminating, sending the FIN and everything super fast, but then when I asked the DPU what its flow table size was, it was really small, like 10,000 or 200k, not even reaching a million.
E: Yeah, if you see here, it clearly says the TCP connection is established and terminated without any data packets, so a CPS TCP connection has to be six packets, right? And like John said, I think clarifying the lifetime of the CPS connection would help. But as far as I understood, it just does the six-packet transaction and then disappears. But half of the CPS connections are UDP, right, so within these 5 million CPS you will still get...
F: I agree, I agree, but I think that should be tested by specifying what the flow lifetime is in conjunction with the aging time. You specify those, and the flow table fills up. I guess my biggest issue is this:
F: I don't think you ever need keep-alives for the CPS load. I think you're inferring the keep-alives because of this bullet, and it just wasn't clear enough to say that you never need keep-alives. If you really want to test a full table, you would control that with the aging time and the flow lifetime.
B: So if I close the flows as soon as possible, then when you ask the DPU for stats, what is your flow table size, the return value is very small compared to this. The way I'm reading this, it's flow table size equals 14 million, and this is just the math of how they reach 14 million. So when I do a DPU get-flow-table-size, I need to see a number close to 14 million during the test. That is what I'm inferring from this math.
F: I interpret this differently. What I read this to say is that your data plane should be provisioned with a flow table of 14 million flows, and the dynamics of the test will determine how full it is out of those 14 million. Like when someone says, hey, I've got a DPU; what's your flow table size? Oh, our flow table size is 64 million. It's a static number for your capacity.
A: Yeah, exactly, and it's good that we can run this down, so we'll try and run this down for you. But beware, I might send you guys an email just to clarify: okay John, okay Reshma, Mircea, and Ned (sorry, I'm just going to call you Ned), are we all on the same page as to what we mean here when we're taking it to the SDN team, as to exactly what we're testing? And Gerald, of course, when he comes back.
B: Okay, and in the lab, just to be clear, I try all the variations here. I try basically not caring about this value, just letting it go as fast as possible and seeing what it is, but then I don't see this number; or I increase the flow timer, and then you do see it. So for me it's very easy; we just need to decide which version we want to do.
G: Christina, I think we had a small update about the SAI PTF framework that we adapted to DASH, which is now being used in the CI that Chris set up. So maybe Volodymyr Mytnyk, would you like to talk about it a little bit? Is that okay?
H: One second. Yeah, so what we did here is we changed the CI, and we talked with Chris about the change. We changed the CI to use the development branch, which includes the changes to PTF that we made for DASH. So right now, any time you create a pull request...
H: ...the CI will trigger based on that branch, and it means you can now start using SAI PTF to write your test cases and use all the changes made for DASH. This is also one of the development branches that we are going to create the pull request for. We put in an example of the overlay scenario for the VNET; it's just an example, so it still needs to be updated based on the latest changes.
H: We have some basic stuff needed to run it, so you can see how it looks and what classes you have to use.
H: And what you need to do to set up and tear down your setup and be able to run the test case. You can use the document referenced in the DASH repo; there is a PTF user guide, and you can use it as instructions to run your PTF test cases, for example, in our case, the VNET one. And actually that's it, just a brief introduction to the PTF test cases and how they are going to look.
H: I know there are some test cases in the VNET folder right now, so over time we can probably merge them into one test case or handle them separately.
I: Volodymyr, thanks. As you and I were discussing on Friday, it would be nice to update the SAI PTF user guide to be a little more practical toward a DASH use case, because right now the user guide has elements geared toward maybe a SONiC-VS-type test or a basic switch-type test.
I: But it doesn't really talk about how you do an actual DASH DPU test. So it would be nice, as time goes on, to document a more practical setup and usage.
G: In the user guide today I think we have the RIF test, which is the router interface test, a very basic test to start with. Again, the user guide basically explains how to use the test framework, how to integrate test cases into the framework, and how to run it. But yeah, once we have the VNET test case solidified, we could possibly use that there as well, or create another.
I: Yeah, in particular I'm talking about things like a little diagram that shows the actual connections. Right now the user guide leaves a lot to the imagination. It shows an example command line, but it has the Ethernet interfaces in there, which you wouldn't use for a real DPU; you'd use NIC ports. Just something a little more practical, so someone could actually follow the instructions and get an example working. Right now there are still missing pieces in the instructions.
I: That's what I mean, and Volodymyr and I talked about this, so I think he knows what I'm talking about. It has examples, like for a SONiC switch or VS, but not a DPU.
I: We can get there over time. But I agree that this PR was really important. Just to provide a little more perspective:
I: Mukesh from AMD also found out that the PTF repo that was in use in sai-thrift was not supporting VXLAN properly, and he came up with a patch for that. But then, talking with Volodymyr, he had a better long-term approach, which is to update PTF itself. So this pull request he just did, and that we committed, uses the development branch of SAI, the dash branch, which has a development branch of PTF to fix these VXLAN problems.
G: Right, I think Volodymyr Mytnyk had come across this VXLAN problem while the VNET test case was being written, so he had made...
G: Right. The reason we wanted to talk about it today is that we are seeing more usage of this framework. So thanks very much to Mukesh, and Mukesh, please reach out to Volodymyr Mytnyk, myself, and Chris if you have any issues, and we can probably all work together. As Christina was saying, for the third stage of the test maturity stages, we could start to use this.
H: Yeah, I can explain briefly. The issue that you faced is only one issue; once you try to send a packet and compare these long packets, you may hit more problems. To fix that, we need to patch PTF again. This is why we provided the development version of PTF, and we are probably going to upstream it, yes. So there is another problem.
G: Basically, it has the framework that is adapted to DASH; that's the main reason. It's not yet there upstream, and we wanted all of us to use it before we upstream it to SAI, but we could start the process in parallel.
I: It turned out I was working with both of you last week; Volodymyr and I had conversations and we put it all together. I think this is the better interim solution. Thanks to you both for all your work on this; we're all finding these little land mines and fixing them.
I: So do we want to spend a couple of minutes just looking at outstanding PRs and try to get some motion in a few areas? We have a few fairly simple PRs in the pipeline that are waiting for reviews.
A: Do you want to present? Do you want me to do it?
I: Can you see? So let's just go through some of the ones that are top of mind for me. One of them is this one that Mukesh started; this began the whole VXLAN journey in the discussion we just had, but he's waiting for some reviews. Reshma, I know you had asked to be on the reviewer list last week, I think last Thursday, and Marian is in the queue. So if you guys could just look at this code; sorry, here are all the changes.
I: If you want to, just take a look through all this. Now, Mukesh, you need to back out some of these little changes to go back to what Volodymyr did; just back out a couple of things and retry.
I: If it doesn't work, we need to know right away and fix it. We'll do that and do whatever it takes to get it going, because we don't want a broken pipeline. But there are some changes here that we need reviews on; just approve it and tell Mukesh you're okay with it.
I: Okay, let's see, that's that one we need to do. And this one here is another one from Mukesh that's fixing the drop action. So either review it and say it's okay, or recuse yourself from the review list and we can let this one go through too. And then, I don't know, Martin, there's one here, but it looks like we have a problem in the tooling.
I: There was a, sorry, there was an action.
I: Yeah, it would be nice to clean up these VNET PRs as soon as possible and get them done, because a lot of energy can now be put into writing test cases for these; Keysight and other people are obviously working on this. So let's try to close these things up and we can move on. That's kind of all I wanted to talk about.
B: I'll bring up the spell check one; that's a controversial one. Basically, I had a bunch of spelling errors in my PRs, so I went and used a tool and fixed everyone's spelling across the whole repo, and then I added a Git action for it. But the thing is, if we decide to take this Git action, then if you have a spelling error it will get flagged, and so will any word that is not in the dictionary but is not necessarily a spelling error.
I: Yeah, so basically a spell check is going to run in the pipeline, and given that these are technical documents, there are tons of words that a standard spell checker doesn't know. So you have to build up this dictionary, and the implication is that if we enable this, anyone who writes a README is going to be pestered by spell check failures until they add the word to the dictionary or fix the spelling. The good news is we fixed about 400 to 600 errors.
A: This is a good one. Does anyone have a strong opinion on this, yes or no? You can speak up or raise your hands.
H: Yeah, just one comment. I think the spell checking is really great for the CI, but I understand that it will fail if anyone tries to push with a word that is not in the dictionary. Can we make this job trigger not an error but, for example, a warning?
B: My first thought was to make it a warning, but GitHub does not allow you to: there is error, fail, in progress, and pass, but no concept of a warning, and people have been asking for this feature for years. There are some hacky ways to do it: it will show as a fail, it will still have the red X, but it will not block the PR. So basically, although it's a failure, it does not block the PR and you can still merge the PR. That's something that can be done, but not changing it to a warning.
I: That's a reference to, yeah, hunting witches, right?
C: Yeah, no, I agree that we should keep the error part, because if you keep putting warnings in there, we continue to accumulate what they call technical debt, and at some point we'll have to pay that debt. Would we rather pay it now or pay it later? Once there's a lot more of it, I think it basically becomes an insurmountable pile that we really have to take care of.
A: Okay, anyone else have a PR they want to check out?
A: Great, well, I'll stop the recording, and we will definitely try to run down the issue we've been talking about today; it's important. John, thanks for bringing it up; you always bring interesting insight. Thank you, and you guys have a good day.