From YouTube: Network Policy API Meeting 20210823
A: Okay, hello everyone. Today is August 23rd, 2021. This is a meeting of the SIG Network Policy API subgroup, which reports to SIG Network. Remember, this is a CNCF foundation meeting, so please be nice to each other. Let's have a productive meeting today. I added a couple of things to the agenda, mainly because there aren't really any new issues. There's one open issue regarding the deny action that Abhishek had already commented on; we're waiting on feedback from them, so we can't really do much there. I haven't gotten any response yet. No other issues; there's actually been a low number of issues coming into SIG Network, which is good.

The next action item I wanted to talk about is attendance at these meetings, and maybe getting more people involved. Is there some other way to inform the broader community that there is work to be done here? The other thing is the three major work items I see as of now: CNP, NetworkPolicy v2, and more investigation of, and a conclusion to, the NetworkPolicy status field. So there's work to do, but I think we've been on a steady downward trend in attendance here, and I'd love to hear some ideas on what we can do to get people excited.
D: Yeah, so from the beginning, Jay and I had a proposal to actually move things faster, and Jay was really excited about building those things based on a controller doing the work. We got good feedback both for and against doing it, and folks from Google are also creating that FQDN policy, which I don't know whether it moved forward or not. But then I guess we started to lose some traction as we started to dig into more complicated stuff, right? If you take a look, the only things we actually delivered that are already in the Kubernetes code are the port range and the namespace selector, which took almost one year to get in, and we still don't know if the port range is going to reach GA or not, for example.

So from that perspective, I think this pushes people back a bit. I think people want to contribute, put their own code into Kubernetes, and help develop, and they get a bit frustrated, because we keep discussing a lot: what can be made, how can we make it, what are the specifications, and we never end up doing the fun part, right?
C: Which is connected to it, right? I mean, yeah, I spoke to both you and Andrew, so there are two parts: the time it takes to develop new network policies, and seeing them working. So yes, we work on the specs, but there's nothing there to take them and execute them. I haven't told everyone, but we will start up a project to sort of try to. Please don't put that in writing.

If people listen, that's fine, but we will start up a project with the idea of filling in the void, as we see it, in network policies. One part of the problem is that, as I said, the CNIs should not consume policies; they should consume something that is a derivative of policies. Call it the logical firewall rules, or something like that. So I think that to get network policy where it needs to be, we need to see it as a two-year effort.

So: start pushing things in, change where things are done, and, just like in other areas, have the important code that makes the important decisions, the control of it, be part of Kubernetes proper, and not spread around in different CNIs. The CNIs should then do sort of what we see we can do with KPNG now, right? They can render a specific flavor of what's already decided, and not own the whole mechanism.
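The "derivative of policies" idea described here could be sketched roughly as follows: a controller flattens selector-based policies into concrete logical firewall rules, and a CNI only renders those rules. This is a minimal illustrative sketch, not the actual proposal; all structures and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FirewallRule:
    """A logical rule a CNI could render, independent of any policy API."""
    src: frozenset   # resolved set of source pod IPs
    dst: frozenset   # resolved set of destination pod IPs
    port: int
    action: str      # "allow" or "deny"

def compile_policy(policy, pods):
    """Flatten a selector-based policy into concrete firewall rules.
    Plain dicts stand in for API objects; a real controller would
    watch the API server and recompute on changes."""
    def select(selector):
        return frozenset(ip for ip, labels in pods.items()
                         if all(labels.get(k) == v for k, v in selector.items()))
    return [FirewallRule(src=select(ingress["from"]),
                         dst=select(policy["podSelector"]),
                         port=ingress["port"],
                         action="allow")
            for ingress in policy["ingress"]]

pods = {"10.0.0.1": {"app": "web"}, "10.0.0.2": {"app": "db"},
        "10.0.0.3": {"app": "cache"}}
policy = {"podSelector": {"app": "db"},
          "ingress": [{"from": {"app": "web"}, "port": 5432}]}
rules = compile_policy(policy, pods)
```

The point of the split is that the CNI never sees `podSelector` at all: it only receives resolved `FirewallRule` tuples, so new policy types can be added without touching any CNI.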
D: Yeah, I think that this is somehow the point, right? KPNG got some traction because we already had the data model, which is the Service and the Service API. It didn't change it, and people could implement whatever they want. So I can, for example, implement KPNG for FreeBSD, because I know how to deal with that.
C: It moves things forward, right? The problem is we have too many forks of kube-proxy, with slowly, slowly diverging semantics of what it means to do something. KPNG brings it together, so at least you have a working control plane; you get the control-plane/data-plane split. For network policies, there is nothing like that, so I do think that gap needs to be filled. We will make a proposal; I'm hiring people to do it, and we will do it completely open source.
D: Right, I will, for example... I think that Jay's approach from the beginning, maybe we should make that an attempt, right? This can still be an official subproject or not, but we could probably do something like: hey, this is how this group has drawn how we imagine the cluster network policy API is going to be, and we are going to develop on top of that. You may use our CRDs, and anybody can use their own data plane, for example.
C: Right, and when we look at what we see happening: KPNG decreases the pressure towards the API server, and that's a lot of the quality of a distributed application, distributing data internally. I mean, what goes to which node and so on, it handles internally. So we don't have all the nodes working like crazy to keep track of where the endpoint objects are and so on.
D: Yeah, but don't you agree, in this case, that if we decide right now to say: hey, okay, we got what we could get done in this last year, with these two things inside the code, and now we think that we should attack something like cluster network policy, but we are going to do it our way, and our way is...
C: I think it's important that we get to a system where it should be easy to add in new policy types. They should be very easy to describe, because in reality they describe, call it, relations between objects, or sets of label groups, right? So those should be easy to add in. And then we have another part, I call it the algorithm, that you update. I mean, we have a perfect example: the flowchart that shows the interaction between the CRDs and the network policies. You should be able to describe, or write code for, that, which can then work over the different policies, because the policy types interact with each other. So we need a framework where someone can come and say:
C
Oh,
I
want
to
do
a
new
policy
and
test
out,
define
the
policy
and
then
be
able
to
point
to
paint
the
graph
right
and
then
also
put
in
an
algorithm
that
generates
rules
based
on
this
and
the
these
rules
goes
into
logical
firewall
and
it's
the
firewall
that
this
dni
implements
and
we're
working
on
this
sort
of
describe
it,
and
so
we'll
come
out
with
sort
of
a
percentage
wider.
I
would
say
within
a
month
and
a
half
two
months.
A: I mean, I think that sounds interesting too. More immediately, I agree with Ricardo in the sense that, and I know I'm not going to speak for Abhishek, but we've been going back... I mean, we've been discussing CNP now constantly, for weeks, and that is not fun. I want to write code.
C: ...add them without having to talk to every CNI provider and get them to do it; we need to get that framework in place. From one perspective, we will do our stuff and we'll use it, but if someone comes with something better and puts it in there, I don't want to own that code. I just want it to be there, and I want to implement the plugin to the firewall, where I can drive the rules into the networking we do.
C: So when you're doing that, you need to go from addresses up in the data plane to something, I call them nodes, that these rules work on. So you sort of get the opposite of these data structures. But I think it needs to follow the model that KPNG is using, so that you have the same sort of breakdown. Perhaps we can take the communication mechanism that's used in there, and so on, but...
F: So I just want to understand the conversation here a little, the comparison with KPNG. Basically, what we have with KPNG is an API at the top layer, which is the Service API, and then we have a controller, which sits in Kubernetes, which converts these Service APIs and creates endpoints for these services. And then the idea, with my understanding, since I have not followed KPNG closely, is that we have providers for these implementations, so that it can be implemented via iptables or some other way. Exactly, so...
C: You have a shared control plane that also scales really well; it helps distribution, and then you can plug in data planes. So if you want to do a specific type of data plane, based on whatever you want, you can plug in a model that generates the rules into that, in any way you want. If you don't want to use DNAT and SNAT, and you want to use a tunnel and so on, you put it in there.
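The KPNG-style split being compared here can be sketched as a control plane that computes state once and pushes it to pluggable data-plane backends. This is a toy illustration of that shape, not KPNG's actual interfaces; all class and method names are hypothetical.

```python
class ControlPlane:
    """Computes and distributes the desired state; backends only render it."""
    def __init__(self):
        self.backends = []

    def register(self, backend):
        self.backends.append(backend)

    def publish(self, rules):
        # Every backend receives the same pre-computed decision,
        # so semantics cannot diverge between implementations.
        for backend in self.backends:
            backend.apply(rules)

class IptablesBackend:
    """One possible data plane; an eBPF or Windows backend could
    register alongside it and render the same rules differently."""
    def __init__(self):
        self.rendered = []

    def apply(self, rules):
        self.rendered = [
            f"-A FORWARD -s {r['src']} -d {r['dst']} -j {r['action'].upper()}"
            for r in rules
        ]

cp = ControlPlane()
backend = IptablesBackend()
cp.register(backend)
cp.publish([{"src": "10.0.0.1", "dst": "10.0.0.2", "action": "accept"}])
```

The design choice the speakers highlight is exactly this: the decision ("allow this pair") lives in one shared place, and only the rendering ("which iptables flags") varies per backend.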
F: So here's the difference between what their model is and what we have so far. We have the NetworkPolicy API, but we don't have the cluster network policy API. That's one thing we need to design, regardless of what's implemented. That's different from Services: the Service API is already there, and there's nothing new to design. That's why it's going to take time for us all to agree upon what the API should look like.
F: That's one. Then we have the controller layer, which is missing today. The controller and the provider are meshed into a single component, which is provided by the CNIs, and that makes sense, because the networking in the end is actually provided by the CNIs, and they do it in different ways. That's why, if there are certain rules allowing certain traffic, you need to create rules about them to firewall them. Well...
F: So one thing we can do is perhaps introduce a controller in between, which sits in Kubernetes, and whose job is only to calculate the pods which are affected by a given policy. But how you realize it, how you create a rule from this set of pods to that set of pods, and what kind of action you take, that is delegated to the CNI.
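The "in-between controller" floated here, one whose only job is computing which pods a policy affects, could look something like this. It resolves selectors into concrete pod pairs and hands them off, leaving enforcement entirely to the CNI. This is a sketch under the speaker's assumptions; the data shapes are invented for illustration.

```python
def affected_pod_pairs(policy, pods):
    """Compute the (source, destination) pod pairs a policy covers.
    How to enforce the action for each pair stays delegated to the CNI."""
    def select(selector):
        return {name for name, labels in pods.items()
                if all(labels.get(k) == v for k, v in selector.items())}
    targets = select(policy["podSelector"])  # pods the policy applies to
    peers = select(policy["from"])           # pods allowed to reach them
    return {(src, dst) for src in peers for dst in targets}

pods = {"web-0": {"app": "web"},
        "db-0": {"app": "db"},
        "db-1": {"app": "db"}}
pairs = affected_pod_pairs({"podSelector": {"app": "db"},
                            "from": {"app": "web"}}, pods)
```

A real controller would re-run this calculation on every pod or policy change and publish only the diff, but the division of labor is the same: Kubernetes computes *which* pairs, the CNI decides *how*.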
D: Your point is that maybe it makes sense, if we have a different API for network policy, let's say v2 or something like that, to have a common thing that calculates all the policies, and then the CNI is just responsible for applying those policies. This makes sense.
C: But actually, I think there needs to be something in between. Basically, I don't think the CNI should care about network policies; they should care about firewall rules, and we need to calculate from the network policies down to these firewall rules. That makes it possible to then use any sort of new network policy definition we want to have: we can add new policies, and a new algorithm that generates into this firewalling API.
A: ...was, how can we make it a little more exciting and fast-moving? Maybe parallelize the API discussion with an actual implementation, or a sandbox implementation; I don't know if that would help. My angle with KPNG, and I had never looked at it, was: okay, we're making CNP, why don't we loosely agree on a YAML, then run out and try to implement it, use it for a little bit, and say, "okay, this doesn't work at all, we need to iterate further on the API", rather than spend a year on the API and then implement it, right? If you look at the Gateway API...
D: It's been in development for two years until now, and they are on v1alpha1, but they are still changing, and this is the idea of being in alpha, right? Because you might find: hey, we developed something that doesn't apply. So I think that sometimes, and I'm going to have some regrets about this if someone watches it, sometimes you need to ask less and do more, right?
D: So maybe one thing, and Jay is actually right about this, is that KPNG got traction because we decided: hey, we are going to do it this way, and if this applies to Kubernetes, okay; otherwise, okay as well. And this is how KPNG is going, right? We can see Mikhail conducting it in his way, they are doing some experiments, and then they got traction on the Windows side. So I think we should probably do the same.
D: ...what we want. Personally, I can't see a reason why we shouldn't start developing something where you can add those policies, and they can be calculated, on the other side, into something that becomes firewall rules. At least we are going to have some fun, and we are going to have something fun for people to take a look at. The Gateway API is still just an API, and we have each ingress controller needing to implement it their own way.
A: I think it's interesting. I mean, these are still two separate things, right: the API discussion with CNP, and this. But the concept of boiling down any policy, CNP, NP, NP v2, to a single set of rules that can be understood by the CNI is kind of interesting, and it can be implemented across various things. I don't know, we're kind of just bikeshedding here, but yeah.
F: I think there are other things we should also try to target. For example, the status field is a low-hanging fruit, right? But the problem is that we need someone to drive it forward, and I think we are missing that. You know, if someone on the call here, or someone who watches this video later on, can take it... because I remember there was someone who was interested in solving that for network policies.
F: That's the part where we have to, you know, make do with the contributions in this community as well. So, if there's someone who can... There's also another cool idea about service account selectors in network policies.
F: There's also the idea of using Services within Kubernetes NetworkPolicy. Those are the things where, if we just take them and move forward, I think there will be some fun for people to discuss on this call as well, and I think there were some implementable KEPs, once we fix some of the nitty-gritties of those KEPs. So those are a few things.
F: Andrew wanted to go a bit deeper on... oh sorry, I'm going to discuss a bit about that later on, right, right.
G: I'm part of that team as well, so yeah, we were actually trying to work on service selectors, but it was put on hold, because we were discussing network policy v2 and how it's going to affect, how it's going to influence... maybe I...
A: I mean, I haven't been on it enough, but I think there's a pretty easy solution to that, if we just decide whether we're going to fail open or fail closed, and move on down the road, right? But it just takes someone to sit down and start writing the KEP. Unfortunately, I don't have time to do that, and that's what it boils down to: more involvement, more people who are excited. I mean, Kubernetes...
A: ...networking security is such a hot space, and I think there's a lot of excitement around it. But in this working group specifically, like Ricardo said, things move really slowly. So I think thinking about how we can sandbox, maybe create a tool to sandbox, or something fun, could gain traction pretty fast, get a lot of people here, and get more done.
D: Yeah, the thing is that when we were designing this, Tim Hockin said something that we should be aware of, which is that we are dealing with something that touches cluster security, workload security. In that sense, if we do something wrong, we might end up with some CVE or something like that, and I guess this is why people are really careful about what we do with network policy.
D: I agree with this approach, but I also agree with the point that maybe we have nothing else to do with network policy, if you like, other than start designing network policy v2, right? But we can't keep waiting for everybody to say whether this is good or not good. So I think we should just say: hey, okay, let's start. Maybe when Abhishek and the other folks finish designing the cluster network policy...
D: ...we can take some lessons from the cluster network policy into network policy v2 and say: hey, okay, this is going to be the CRD for network policy v2, and we are going to have some controller here that's going to do all of the rule calculations, and if a CNI is going to rely on that, you just need to take this data structure and apply the policy as-is. Right?
F: I think Nadeem and Prasad are looking into the v2 as we speak, so maybe we should also see whether there is anything we can help with there, and perhaps, you know, whether they have further iterations, and what they feel about the v2.
H: Yeah, yeah, Abhishek, yeah. You know, we were just doing a side chat with Nadeem while you guys were talking. The reason we slowed down on the v2 is that we wanted to see how CNP is evolving, because some of the problems are common. So hence we slowed down there, but, you know, I agree with Ricardo's point.
H: Right, so I mean, there could be disagreements between the ways we were proposing things, because some of the things are a little controversial, right? Like, we are trying to say "endpoint one" and "endpoint two", whereas here we are talking ingress and egress. But we can just huddle around, inline, and say: hey, fine, let's just go after it and start implementing.
H: You know, because if you look into other existing policies, either Calico or Cilium or even Juniper, they are doing, you know, endpoint-related things, so that means it has enough...
F: ...you know, because NetworkPolicy has been present for many years. So maybe we take those use cases, and we just now define, or implement, those use cases with the YAMLs that your proposal kind of solves, and then we can see why this is an improvement over NetworkPolicy v1, how these solve them, how simple it is to use v2, and how extensible it can be with v2. I think that would be a good start. What do you think? Yeah? No, I think I agree with you.
A: And this is already a start on that, kind of, your doc you already started working on... Sorry, I missed it, can you repeat that? This, your document "network policy v2 motivations", is already kind of a good start, a good place to keep putting ideas down.
H: True, true, true. So, you know, Abhishek, maybe right after this call sometime, we can get the YAML files, because I was missing for a couple of the last meetings, I was on PTO. We can take some of the YAML files which you are creating, we'll convert them into the v2 way, and then we can start to compare and contrast, as you're proposing.
F: Fair, yeah. And I think, you know, this last week or so, we are also learning in the process, because we have also evolved the CNP API, and the way we present it, in so many ways over the last few months. So on Friday and today, Andrew and I made some changes, tweaks, to the way we want to showcase our API design, and a couple of the things that we did are, I feel, an improvement.
F: We now have YAMLs based on each use case, and then Andrew has done these nice use-case diagrams for every use case, and I feel that it's much clearer now. I think once we go through that, if everyone feels that this is a much clearer way to express our intentions, then I guess you can follow the same suit, and that will help: you can do a tabular form between the v1 versus v2, how v2 is better, those kinds of things.
E: No, yeah, the main thing, why we took a break, or took a pause, here: of the two major changes in v2, one is related to how the end-to-end policy works, and the other is the priority model. The priority model was in complete overlap with CNP, and we had been discussing the priority model in CNP for the last four weeks, right? So I think, unless that is closing... because we don't want to propose something in CNP and something different in v2, because it goes back to back, right?
H: Yeah, and I mean, not to say there's anything wrong with what we are doing, but we have, I don't know, 10, 20-plus years of combined experience in the firewalling area, and that we are not able to close things pretty quickly is a little surprising to me.
H: You know, I don't know, maybe it's because of too much legacy we have in our heads, but I was a little surprised at the way things are going pretty slowly.
B: I agree with that.
H: At the end of the day, you know, Nadeem has, I don't know, six, seven, or maybe close to 10 years of experience in firewalling; I have similar; and I'm pretty sure Andrew and Ricardo have that level. If you add it up, it's almost 25, 30, or even 40 years of good experience in this area, and we were not able to shake it off. It's...
A: ...surprising. So I mean, I think, personally, something that will help for the next one is having more concrete deadlines, and, you know, if there are disagreements with what's being put forward, we have to have deadlines for posting responses to...
A
Those
disagreements
right
we've
kind
of
been
going
in
circles
in
a
lot
of
ways,
and
so
it's
given
me
some
ideas,
like
I
think,
moving
forward,
especially
with
the
network
policy,
api
repo,
how
we
can
document
how
this
process
should
work
when
looking
at
how
it
hasn't
worked
with
cmp
and
that
we
have
a
rigid,
somewhat
rigid
framework
work
for
api
development
within
the
sig
network
policy.
Api
subgroup
and
it'll
make
it
easier
so.
H
Got
it
got
it
okay,
so
so
I
think
nadim
and
myself
will,
you
know,
start
circling
with
abhishek
and
you
know
ricardo
and
others
so
that,
like
you
know,
we
will
try
to
put
v1
versus
v2.
You
know
on
the
same
spreadsheet
or,
like
you
know,
side
by
side
and
see
you
know
whether
we
are
making
some
improvements.
If
so,
how
and
what?
What
are
the
additional
things
we
are
solving?
You
know
in
me
too,
right.
A: Cool, thanks. Okay, so for the last 20 minutes we can look at what Abhishek was talking about, I guess, with CNP. So here's the current status with CNP: we are still kind of stuck, I would say, on implementation, and so far we've boiled it down to three major categories. This is a good pros-versus-cons of all three. Abhishek, do you think we should run through this, or should we run through what we've updated on the PowerPoint?
A: Yeah, so this is where we're kind of stuck. We as a group agreed, generically, and I want to say I think everyone agreed, I hope everyone agreed, and if they didn't, they've hopefully commented on this already, on these use cases for CNP. We know they're not exhaustive, but they seem to be the most common use cases, and that's kind of our base point. In the future, these use cases should be approved before we even move on to API design, and they would be approved in the SIG Network Policy API repo, and everything would flow from there. The use cases are the root of truth for any API, as they should be, along with the personas.
A
So
I
think
that's
what
we've
done
so
far,
and
now
we
are
basically
locked
in
on
the
two
different
types
of
solutions:
I.e,
implementations,
so
three
actions
versus
priority
ordering,
and
so
what
we've
done
so
far
is
we
took
a
first
stab
at
making
sample
yamls
for
each
option
for
each
use
case.
So
that
is
the
part.
That's
not
done,
but
what
is
done
is
clarification
of
these
use
cases.
So,
as
you
can
see
here
now,
we
walk
through
every
use
case,
they're
actually
indexed
on
this
slide.
A
We
walk
through
the
use
case
explicitly,
and
then
we
also
have
an
explicit
diagram
for
each
use
case.
So
first
one
is
isolate
pods
carrying
sensitive
data
from
namespace
name,
sensitive
name,
space
from
all
their
name
spaces.
This
is
basically
a
strong
deny
use
case
right.
The
cluster
admin
wants
to
be
able
to
fully
block
pods
in
a
sensitive
namespace
from
everything
else.
That's
a
pretty
straightforward
use
case.
A
What
we've
done
now,
though,
is
add
diagrams
here
to
adequately
represent
it,
and
then
what
we
want
to
do
is
have
yamls
right
after
those
diagrams
that
match.
What's
going
on
so
like
how
do
we
implement
this
for
priority
ordering
based
yaml
versus
three
action
base?
Yml?
Does
that
make
sense
to
everybody.
F: Yeah, one of the things is that I missed a couple of meetings in the last few weeks, and so I did not know whether we all agreed upon those use cases or not, and that's why I think this visual representation kind of helps, at least to clarify what we really mean by each use case.
F: So I feel it's clearer, but, you know, the other folks on the call: if you think that this is something still not well understood, we should correct that, because in the end, if we have a difference of understanding about a particular use case, our YAMLs are not going to make sense.
A
So
I
tried
to
boil
this
down
to
somewhat
simpler
of
a
representation,
so
4a
is
strictly
deny
international
space
traffic,
but
delegate
public
service
to
be
maintained
by
network
policy
or
or
allowed
or
denied
via
network
policy.
So
a
couple
things
to
highlight
here
is:
you
can
see.
We
have
two
tenants,
one
foo
one
bar
and
they
are
surrounded
by
an
overrideable
deny
right
so
by
default.
A: ...excluded. So that's kind of the decision we have to make: what do we think is easier to understand? Sanjeev tackles this, and he's not able to be here today, but he tackles it as: delegation means let's not match on it. Versus Abhishek and Yang, who kind of do match on it: we want to explicitly say that we want to delegate the traffic to service "pub". And our role as a group is to say which is easier, which is simpler, what makes more sense.
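The two options being weighed, a "three actions" model where a cluster rule can Allow, Deny, or Pass (delegate), versus pure priority ordering, can be illustrated with a toy evaluator. This is only a sketch of the semantics discussed here, with invented field names and a default-allow fallback assumed purely for illustration:

```python
def evaluate(traffic, cluster_policies, namespace_policies):
    """Toy evaluation of the three-action model: cluster policies run
    first; Allow/Deny are final, Pass delegates to the namespace tier."""
    def matches(rule, t):
        return all(t.get(k) == v for k, v in rule["match"].items())

    for rule in cluster_policies:          # admin tier, highest precedence
        if matches(rule, traffic):
            if rule["action"] in ("Allow", "Deny"):
                return rule["action"]
            break                          # "Pass": fall through to namespace tier
    for rule in namespace_policies:        # ordinary NetworkPolicy tier
        if matches(rule, traffic):
            return rule["action"]
    return "Allow"                         # assumed default, for illustration only

cluster = [
    {"match": {"dstService": "pub"}, "action": "Pass"},  # the "yellow line"
    {"match": {"dstTenant": "bar"}, "action": "Deny"},   # the "pink box"
]
ns = [{"match": {"dstService": "pub"}, "action": "Allow"}]

pub_verdict = evaluate({"dstService": "pub", "dstTenant": "bar"}, cluster, ns)
other_verdict = evaluate({"dstService": "other", "dstTenant": "bar"}, cluster, ns)
```

In Sanjeev's variant, the "pub" hole would instead be expressed as an exclusion inside the Deny rule's match, so no explicit Pass rule exists; the observable behavior for these two flows would be the same, which is why the debate is about readability rather than expressiveness.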
H: So I missed this one, but is this the one where, after the NetworkPolicy evaluation, this would get evaluated? Is that the one?
E: If you think of Sanjeev's proposal, the way we have put it in v2, yes. But if you want to explicitly code the delegate in the YAML, then there's the alternate proposal, I think from Abhishek, right?
H: So, I mean, not to bring a different angle to this: typically, what we have seen used by customers is "pre" and "post", right? Where the "post" is what things are to be done afterwards. It's clear in the YAML that you can specify: hey, this is what I'm doing post. So would that help?
E: The posts are part of the discussion, yes. I mean, in v2 we have put it as pre and post, but even in the current proposal, if you think of the precedence option, or the bucket option, something which is delegated will come in after the NetworkPolicy is evaluated.
A: Yeah, so basically, these yellow lines mean that the traffic to service "pub" will be allowed or denied by a NetworkPolicy, a normal NetworkPolicy. It'll fall through this CNP border that we've put around the tenant.
E: Yeah, I think we should have Sanjeev for the discussion, but if I have to put my view: I like this proposal, because typically, in a firewall, we don't code for the delegate. We let people add that rule up front, before the broader rule, wherever they want to add an exception, which in this case is...
E: No, I mean, if I have to implement this in a typical firewall, I can just deny the traffic, without any mention of the pub service, but somebody can put a rule just above this, or anywhere above it, an allow rule saying which tenants can access service "pub".
E: Yeah, that's an explicit allow, but, you know, even though the same traffic is matched by these two rules, there's the one broader rule which says nobody can talk to tenant "bar", right?
E: That is true, but instead of poking holes, if we evaluate the broader policy after the namespace policy, that gives namespace administrators an option to override those policies, something...
F: Service "pub" is just one service within the "bar" namespace, but the other workloads should not be able to talk to each other, and that's a strict no; that's a higher-level, cluster-level policy. So in your proposal it will be in the cluster admin bucket, right? The pink one, yes. Right, but now you need to carve a hole out of that, to be able to let the service "pub" traffic through, so that you can delegate it to the lower-priority, lower-bucket rules. Correct, otherwise...
E: Okay, right, so yeah, that's our discussion, the two discussion points. One: in Sanjeev's proposal, he's not coding the holes; he's just saying, "okay, I'll exclude these things", correct, the exclusion, yes. The other option is: you code the holes and let the network administrator, sorry, the namespace admins, do something with the hole, whether to allow or deny. Now, this coding of the hole, that is something very new, in my opinion. We don't typically code for the holes; this delegate mechanism is very...
F: The thing is, you know, we don't have separate grouping constructs. We are not putting IP addresses in here, where it's easy to exclude certain IP addresses from a list or a group; we are using label selectors, and with label selectors it's, oh no, it's not harder, it just becomes a complex mechanism, and that's why we have to work towards that proposal, and that's the only thing, I mean, otherwise...
E: Unless you do it that way, it's very complex to do, because if you put it on one side, then it does not really capture what hole you want to actually allow, right? But I was...
E: I understand. I'm maybe changing the use case, but do we have to have the use case where the cluster network admin only allows the namespace admin to make a decision in that particular hole? Or can we make it a little broader: from a cluster policy, I will put a recommended deny, but I'll leave it open for you to decide if you want to do something different in your namespace.
F
Sorry, go ahead. No, I was just going to say that, as you mentioned, you can solve it priority-wise. It's just that the YAML looks complex, that's the only thing. In the end, we want to make sure that the things we are able to write as administrators are simple and solve all these cases. Sorry, go ahead.
A
Yeah-
and
you
know
the
question
I
think
has
boiled
down
to
do
you
write
these
pink
boxes
and
explicitly
write
them
so
that
there's
two
little
gaps
in
them
or
do
you
write
these
pink's
boxes
and
then
write
these
yellow
lines
like
I'm
trying
to
really
dumb
it
down,
and
you
can
see
that
explicitly
here
right.
This
is
priority
ordering
this
is
explicitly
writing
those
pink
boxes.
A
It's
saying
we're
going
to
isolate
that
tenant
which
is
tenant
bar
and
we're
gonna,
isolate
it,
except
for
the
pods
that
are
backing
the
public
service
right
now.
In
my
opinion,
I
agree
that
this
is
probably
what's
more
traditionally
done
in
firewalling,
but
I
think
writing
the
pink
box
with
holes
in
it
is
more
confusing
than
explicitly
just
writing
the
pink
box,
which
is
basically
this
the
strict
and
eye
checked.
All
the
other.
Namespaces
is
right
here.
A
It's
a
deny
action
and
then
writing
the
empower
rule
to
explicitly
you
know,
write
those
yellow
lines
and
that's
just
my
opinion
in
terms
of
readability-
that's
that's
kind
of
where
I'm
at
on
it.
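The preference described here, a plain deny "pink box" plus separate, higher-priority "yellow line" allows, might look something like this sketch. Again, the kind, actions, and the meaning of `priority` (lower number evaluated first) are assumptions drawn from the CNP discussion, not a finalized API:

```yaml
# Higher-priority allow: the "yellow lines" to the svc-pub backends.
apiVersion: policy.example.k8s.io/v1alpha1   # hypothetical group/version
kind: ClusterNetworkPolicy
metadata:
  name: allow-svc-pub
spec:
  priority: 5                                # evaluated before the deny
  appliedTo:
    - podSelector:
        matchLabels:
          app: svc-pub
  ingress:
    - action: Allow
      from:
        - namespaceSelector: {}
---
# Lower-priority deny: the "pink box" isolating tenant bar,
# written with no holes in it at all.
apiVersion: policy.example.k8s.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: isolate-tenant-bar
spec:
  priority: 10
  appliedTo:
    - namespaceSelector:
        matchLabels:
          tenant: bar
  ingress:
    - action: Deny
      from:
        - namespaceSelector: {}
```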
A
I
don't
know,
though
I
mean
that
it
is
a
tricky
thing,
but
I
think
that
that's
where
it
boils
down
to
is:
do
you
write
the
yellow
lines,
or
do
you
write
the
pink
boxes
with
the
with
the
holes
already
in
it,
and
and
writing
that
with
the
yaml
is
a
little
bit
more
complicated
and
we
could
also
say
like
this-
isn't
a
valid
use
case,
but
I
think,
generally
from
our
discussions
around
cnp.
E
A
E
E
That would be acceptable and definitely very easy to manage. But this particular use case, where the cluster network policy only gives the namespace admin a hole to manage, is going to complicate things, because that kind of thing is not done today in network policy or in legacy firewalls.
F
I
mean
it's
not
just
a
hole
for
the
service,
public
or
public
services
and
other
things
or
name
spaces
like
monitoring
or
other
namespaces
system
namespaces,
but
it's
also
intra
name
space
traffic.
You
may
you
may
not
want
your
internet
space
traffic
to
be
always
allowed.
So,
for
example,
though,
within
that
phone,
ns1,
n2
and
s2,
you
may
not
always
want
that
to
be
explicitly
allowed,
so
that
should
also
be
dedicated
and
that's
a
very
common
use
case.
E
E
The combination. The question is whether, at the cluster network policy level, the administrators can put in this broader rule and trust that the namespace admins will only override the hole that was intentionally opened from the cluster network policy level. If we have that trust, then it is much simpler to just express it with a priority; but if we don't trust them, and we want to implicitly give them only the option of opening that hole...
E
E
...policies, and after that the device-level policy. But we trust the device-level administrator: if they're putting anything there, they know exactly what they're doing. They definitely cannot override something which comes before, but they can override something which comes after. Now, by overriding something which comes after, if they make a complete blunder, then yeah, they can do that. That's the part I think delegate tries to solve. But that's the decision: whether we can give this kind of control to the namespace admins or not.
F
So
I
think
we
both
are
trying
to
solve
the
delegate
use
case
right.
It
is
just
how
it's
being
solved
yes
right.
In
one
case,
we
are
explicitly
providing
that
action.
In
the
other
case,
you
are
carving
out
things
because
you're
then
not
writing
a
more
respective
deny
you're.
Actually
writing
a
deny,
which
is
composite
of
a
lot
of
rules.
H
So
how
about
if
you
go
iterative
right
in
the
sense
initially
you
you
carve
out
in
such
a
way
that,
like
you
know
whatever
is
needed
in
the
cn
I
mean
in
the
cluster
level
and
if
we
are
finding
it
hard,
then
you
know
we
can
introduce
the
you
know
whole
mechanism.
H
But if you do it the other way, then you don't have that opportunity. This way you can iterate and see whether providing the hole is really required or not.
F
You
know
that
doesn't
solve,
I
mean
so,
for
example.
The
reason
why
we
are
thinking
of
v2
is
because
we
are
not
able
to
iterate
on
v1
once
we
have
certain
things
so
like
then,
the
question
of
fail
open
versus
fail,
close
come
in,
and
so
we
need
to
think
carefully
when
we
are
trying
to
do
something
like
that.
In
that
sense,.
H
No,
I
don't
mean,
like
you
know,
the
cnp
will
be
closed
but
at
least
like
you
know,
we
make
progress.
Then
you
know
during
this
course
right.
We
realized
that.
Okay,
fine,
you
know,
we
do
see
a
need
to
create
a
you
know,
whole
mechanism,
I
mean,
then
then
you
know
we
start
introducing
that
see.
Otherwise
you
know
I
mean
we
know
this
is
the
you
know
we
are
agreeing
on
the
problem.
We
are
not
agreeing
on
this.
How
we
are
providing
the
solution
right.
H
So
so
does
this
change
any
aspect
you
know
once
you
go
on
the.
H
The
the
other
way
of
doing
things
where,
like
you
know,
with
give
the
priority
orders
or
the
rule,
numbers
and
stuff
like
that.
A
Certain
pieces
right,
I
think
both
both
solutions
can
implement
every
use
case
right.
Priority
ordering
can
lead
to
some
yaml
explosion.
Three
action
can
lead
to
some
yama
explosion.
One
of
the
major
things
of
priority
ordering
is
something
to
remember
that
I've
always
found
interesting
in
this
discussion
is
like
abhishek
and
yang.
A
So
say
you
have
a
policy
at
priority,
10
with
10
rules,
and
then
you
have
another
priority,
and
rules
are
only
ordered
based
on
the
order
in
the
ingress
egress
sections
like
there's
an
unlimited
number
of
priorities
that
can
go
on
there,
because
priorities
are
floats.
Am
I
saying
that
right
on
avashek.
F
The
other
thing
is
that
you
know
it's
visually
very
hard
to
figure
out
that
by
just
looking
at
yamos-
and
you
know,
we
have
dashboard
you're
able
to
see
things
clearly
and
that's
the
that's
the
thing
that
you
won't
get
as
part
of
your
upstream
experience.
Like
you
know,
you
just
have
gambles.
You
need
now
another
way
to
visualize
your
policy
or
your
overall
hierarchy
of
rules,
and
those
are
a
few
things
which
are
kind
of
like
challenges.
It's
not
something
to
say
that
it
it's
not
something
that
we
cannot
solve.
F
It's
just
that,
or
at
least
you
know,
you
need
a
dashboard
to
figure
out
the
whole
hierarchy
of
your
security
rules
and
how
the
flow
happens.
But
you
know
those
are
the
things
that
you,
those
are
the
pros
and
cons
that
we
have
to
make.
F
E
...versus bucket options, right. So, if I understood you correctly, are you asking how this particular use case plays out between these two options, or are you saying we should finalize the options first and then come back to this?
H
Yeah,
actually
you
know
you
know,
maybe
I
initially
I
thought,
like
you
know,
the
first
point
was
I
was
trying
to
go
after
first
point.
If
it
is
common
for
both
the
priority
versus
bucketization,
then
you
know
fine,
you
know
we
can
discuss.
H
If
once
we
choose
one
versus
the
other,
you
know
if
the
other
diminishes,
then
you
know,
then
the
discussion
is
mute
right.
So
so
now
going
back
to
the
priority
ordering
right,
I
still
find
it
pretty
hard.
How
do
we
do
prior
to
our
training?
You
know
if
you
have
a
pod
selectors,
you
know
you
know
you
can
select
in
a
number
of
ways.
H
You
know.
Let's
say
you
take
an
example
of,
like
you
know,
I'm
trying
to
select
a
pod
with
the
application
equal
to
hr,
app
and
another
part
selected.
This
application
equal
to
hr-
and
you
know
you
know,
site
equal
to
sunnyvale
and
you
know,
or
you
can
have
you
know
a
few
other.
You
know
ways
of
doing
the
part
selections
right.
How
how
do
you
put
the
ordering
among
these
network
policies?
That's
an
asset
yeah.
So
you
have
no
clue
how
you
know
this
will
will
get
formed
right.
H
So
that's
where,
like
you
know,
one
approach,
we
know
we
you
know
we
did
was
you
can
do
only
one-way
selection,
which
is
application
weight.
You
can
select
only
the
application.
That's
the
only
anchor
point.
Then
inside
the
application.
You
have
a
ordered
set
of
policies
inside
each
policy.
You
have
order
set
of
rules.
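The ordering concern above can be made concrete. With today's allow-only NetworkPolicy API the two policies below simply union together, but once deny actions exist, a pod labeled `{app: hr-app, site: sunnyvale}` is selected by both, and nothing in the selectors themselves says which policy should win (the namespace and label values here are illustrative, not from the meeting's slides):

```yaml
# Policy 1: selects by application label only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: by-app
  namespace: hr
spec:
  podSelector:
    matchLabels:
      app: hr-app
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector: {}   # allow from all namespaces
---
# Policy 2: selects by application AND site; overlaps with policy 1
# for any pod carrying both labels.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: by-app-and-site
  namespace: hr
spec:
  podSelector:
    matchLabels:
      app: hr-app
      site: sunnyvale
  policyTypes: [Ingress]
  ingress: []                     # isolate: allow nothing
```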
H
F
That's the reason why our legacy way of writing things does not map directly onto this. Or, I mean, it can; it's just that we need to adapt to the community's way of doing things, and since the label selector is the main way of writing this grouping, it becomes harder to use priority ordering. But we are way over time; I have another... yeah.
A
Sorry,
sorry,
I
was
letting
it
run
because
that
was
some.
I
think
that
was
some
really
good
conversation,
so
you
know
to
get
to
a
conclusion
on
this,
though
prasada
nadeem,
I
think
it's
good
that
y'all
are
kind
of
on
the
other
side,
with
sanjeev
a
little
bit
on
this
priority.
Ordering
thing:
do
you
think
you
know
finishing
this
powerpoint
truly
so
making
sure
that
every
sample
yammel
we
have
is,
is
in
line
with
the
diagrams
we've
written?
Do
you
think
that
would
help
us?
You
know
take
that
next
step.
A
Okay,
so
that's
gonna
be
my
goal
for
this
next
week,
trying
to
foster
that
moving
along,
because
I
would
like
to
finish
this
up
like,
like
you
said:
we've
been
talking
about
this
for
a
long
time
and
some
of
those
problems
are
going
to
come
out
as
we
write
these
animals
and
look
at
them
because,
right
now
the
yaml
samples
are
not
correct.
We're
correct
up
to
the
diagram,
but
these
are
wrong
in
some
places.
D
So
the
other
thing
that
I
want
to
suggest
you
andrew,
is
maybe
we
should
add
or
change
them,
this
meeting
to
be
something
like
okay.
This
is
going
to
be
now
just
network
policy,
v2
or
just
cluster
network
policy,
and
we
are
going
to
move
forward
and
faster
to
these
stuffs
to
start
having
proper
fun
like
so.
A
A
Yeah, but the goal is to get this PowerPoint done and then bring it to SIG Network and really try to get it nailed down, because I think we almost have enough description now that someone could know nothing about the last year of discussion, look at this PowerPoint, and say: okay, this makes sense, or this does not make sense, for each YAML and each use case. Yeah, cool, all right. Okay, thank you.