C: We have found, both in the config and the user experience working groups, that there are a lot of people thinking about validation. It's a huge pain point in Istio today to not know whether your objects are just being swallowed and essentially piped to /dev/null because they're invalid, or if you've actually got valid configuration and something else may be going wrong, or it's just healthy.
C: So there are a lot of us thinking about it, and there are a lot of us thinking about it in different directions. We've heard from Ed Snible from IBM, who has put together an analyzer, istioctl inspect, which does some interesting things and, as I understand it, is going out with 1.3. We have istioctl analyze, also going out with 1.3, which came from Oz, who couldn't be here today, so we're going to be representing him. Kiali also has some awesome validation features.
C: Niraj, you've worked on the istioctl vet command, with some great validation rules there. So there's clearly a lot of energy being put towards validation, a lot of thought; it's a very big user pain point. What we'd like to try to achieve is that in the 1.4 time frame, Istio has a comprehensive validation story. But there is a downside to all of us working on validation.
C: The worst case scenario is that we each produce our own unique tool, with its own unique rules, looking at its own unique objects, and then you can say, "well, my cluster is istioctl-vet valid, but it's not istioctl-analyze valid." I don't think that's a use case that any of us wants. So we've talked about this a little in the config and user experience working groups, and, is that the document? There it is. Thank you; I can't talk and type at the same time.
C: So the purpose of this meeting is to start getting everyone on the same page. To that end, we've heard a lot from IBM and Ed Snible, a lot from Google and the work that we've done on multi-object validation here and there, and we've put together a document that, in case any of you haven't seen it, sort of walks through the four or five tools that we're aware of. What I wanted to start with is asking: is there anything we missed? Are there tools we've missed?
G: Gatekeeper and OPA, Open Policy Agent. The Gatekeeper project is out of Open Policy Agent, but it's effectively a controller that sits in the control plane and lets you, you know, kind of create constraints and constraint templates. The constraint actually includes the OPA language, Rego I think it's called, and in the OPA library on GitHub there is actually a handful of audit policies you can copy, so they do something similar.
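For context on what those constraint templates look like, here is a condensed sketch modeled on the required-labels example from the OPA/Gatekeeper library (names abbreviated and the Rego simplified; not an exact copy of the library policy):

```yaml
# ConstraintTemplate: defines a reusable policy kind plus its Rego logic
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
# Constraint: instantiates the template for a particular set of resources
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
```

The split between template and constraint is what lets the library ship reusable audit policies that users parameterize without writing Rego themselves.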
G: I think there's some similar overlap, and I have opinions on how that could be better and what the challenges there are, but I'm working through it right now to see if I can get that up and running to at least do something simple, like reject my service, or at least warn me, if I don't have the correct naming structure.
C: And use cases are definitely something we want to talk about in this meeting, in terms of how we want these validation tools to be accessed. Certainly we have some use cases that are already established, like Kiali, that we would want to continue providing for, if they're interested in using what we build (obviously separate projects, separate governance). But yeah, once we're done with the tools section, I would like to circle back to that and talk about the different ways this tool could be used.
C: My next big question for everyone is what our use cases are, and I'm going to start by throwing out one that Chris Wilson has championed here at Google, and that is: we would like for users to be able to run whatever validator it is, so it's got to have some kind of a command-line interface, I guess.
C: To do this, we'd like to be able to run it against YAML files: not against a live Kubernetes cluster or a live service mesh, but static configuration files in YAML or JSON format, so that if you're in, like, a GitOps model or something like that, you can get validation information before deploying into your production cluster. So that's one use case, but I'd like to hear from you guys about others that you're interested in.
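As a sketch of what that file-based mode amounts to, here is a minimal multi-resource check in Python. Everything here is hypothetical (JSON inputs for simplicity, and a single made-up cross-reference rule); it only illustrates the idea of validating static files against each other rather than against a live cluster:

```python
import json
from pathlib import Path


def load_objects(config_dir):
    """Load static config objects from disk (JSON here for simplicity;
    real tools would also accept YAML)."""
    objs = []
    for path in sorted(Path(config_dir).glob("*.json")):
        objs.append(json.loads(path.read_text()))
    return objs


def check_gateway_refs(objs):
    """Hypothetical cross-resource rule: every VirtualService must reference
    a Gateway that is also present in the same config set."""
    gateways = {o["metadata"]["name"] for o in objs if o["kind"] == "Gateway"}
    findings = []
    for o in objs:
        if o["kind"] != "VirtualService":
            continue
        for gw in o.get("spec", {}).get("gateways", []):
            if gw not in gateways:
                findings.append(
                    f"VirtualService {o['metadata']['name']}: "
                    f"references unknown gateway {gw!r}"
                )
    return findings
```

A CI step in a GitOps pipeline would run `check_gateway_refs(load_objects("manifests/"))` and fail the build on any findings, which is exactly the "catch it before the production cluster" property described above.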
J: For us, the way we do that is in two modes. One way you can run the CLI is to point it at the containers inside your cluster, and we'll look at all the resources that are currently running and then print out a bunch of stuff. So I would take the other side of this, which is: I should be able to inspect a running cluster, and it can tell me at any point what the failures are right now.
E: You are going to start to see a lot of red signals. Another thing is that some findings are just warnings, things we don't know are actually problems; whether each one is useful is something we'd want to agree on with more analysis. Another thing is that we don't know when we don't have information. For example, if the user is referring from one namespace to another namespace that the user has no access to, Kiali can detect "I cannot access that namespace for some reason."
E: If you only inspect the YAML, it won't cover all these cases that we do, so for that we need the status of the configuration. What we are doing today is reading the resources and also combining that with all the runtime information that we can gather from the Kubernetes API. This is basically what we would like: not to be part of the Kiali backend as it is, but something that can be done elsewhere.
E: Sorry, it can be done in, like, a service that Kiali can consume. A service is something that we have seen before; we started proposing this at the beginning of the project with Galley, but okay, it was focused more on schema validations, which should be very fast. We have seen that something in the Istio control plane could make more sense for us.
J: The last point that Lucas raised is really important for us also: it should be easy to consume, like event-based, or one place to look at what is happening. It's a little difficult to create, like, a dashboard or some other reporting tool if you have to go and inspect every other configuration.
J: I'm proposing the opposite: the output. Suppose you figured out, when you did multi-resource validation, that something is wrong; now I want to be aware of all the things that are wrong. How do I do that? Do I go to all the resources in the cluster, which can be all the Kubernetes resources and all the Istio resources, and then look at the status field, right?
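The status-field approach being discussed might look roughly like this on an Istio resource. This is a sketch only; the `validationMessages` field and the code format are illustrative, not a committed API:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts: ["reviews"]
  gateways: ["missing-gateway"]   # the misconfiguration a validator would flag
status:
  validationMessages:             # hypothetical field written back by the analyzer
    - code: VAL0001
      level: Error
      message: references gateway "missing-gateway", which does not exist
```

The appeal is that any dashboard or CLI can then answer "what is wrong in this cluster?" by listing resources and reading one well-known field, instead of re-running every validator itself.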
C: That would be a good idea. Yeah, there are other places we'd like to put error information too, but not the only place; one place is the status field in Kubernetes. From there, I'd also like to call out that whatever we build (and this is moving a little bit beyond use cases), we want to think bigger than Kubernetes. One of the things that we're hearing from the Technical Oversight Committee is that over the next year we're going to begin supporting other platforms; no word yet on what those platforms are.
G: We can make some assumptions about component availability. Even if we assume no Kubernetes, Galley is still a component that exists somewhere, so in theory it's still accessible. Likewise, istioctl still exists; it's just, you know, there might be subcommands or operations that are different.
D: So, one thing: I just added a line here; I just wanted to call out something else that to me sounds like a really useful case, and I wanted to sanity-check it. This is being able to kind of combine those file-based and live-cluster-based cases, so you can essentially simulate the impact of applying a set of files to your currently running cluster.
J: It's like the OpenShift one, I guess, something like that. Yeah, cool. Just one refinement on this: I think we are in agreement, but I'm assuming we are not going to constrain ourselves to just doing multi-resource validation of Istio resources; it will also cover how they interact with Kubernetes resources. Is that correct?
D: That has been my understanding: resources that are not strictly Istio resources, but are still important to Istio. Do we want to write that down? I think that makes a lot of sense. I think we have some open questions, like: can we attach a status field to a Kubernetes service? I don't know if that's possible or not. Yes.
E: And do you think also, I guess the main remaining thing there: we store the validation in the status of the object, the subject of the object that is invalidated, but we should evaluate if we can create some kind of events to generate a timeline of, you know, what happens when I delete a resource, or if I make several changes to the same resource, and I would like to have that.
C: What we've talked about so far is all inspecting the config objects, essentially at the Kubernetes or Galley level of abstraction. What Ed did is he actually goes to each individual Envoy proxy, dumps its config, and evaluates it in light of what Pilot says the object should have, essentially running a diff. So his tool covers validation at that level.
J: For me they are different, because when you go to eventing, just the runtime eventing, you can have runtime eventing from multiple components. You can have runtime eventing from Pilot saying "I'm having failures distributing configuration"; you can also have runtime eventing from Citadel, and even sidecars, which is extreme, but we could. But istioctl inspect, I think, should restrict itself to one thing, which is: given a pod, tell me what configuration applies to me. I think that's what Ed was going for.
G: This brings up interesting points. I think this is part of something we didn't really cover here so far, but I think you're getting to a point of saying we need not just an object-centric kind of validation; we need to look at logical validation, which is: take a collection of rules or objects...
D: Not ideal, right? So that segues a bit into one of the things that I think is probably an important use case here: we try to make it obvious to users what the right tool to use is. Or rather, we don't want to confuse users by having a whole bunch of different tools that do different things where it's not obvious what they are. I don't want us to end up having an istioctl analyze and an istioctl inspect and an istioctl validate, which, by the way, all basically exist in some form.
J: It was just about making the UX right; I'm sorry, I had written this before in the requirements and it came out differently. Thank you, though. The other thing I was going to say is: it feels like there are five or six tiers of validation, right? I think Oz and I talked about it long back. One is single-resource-only validation, which happens as part of admission webhooks, yeah.
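As context for that first tier: in-cluster single-resource validation is what Istio's Galley admission webhook already provides. A trimmed sketch of that kind of configuration (service name and path are illustrative of the Istio 1.x setup, not copied verbatim):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: istio-galley
webhooks:
  - name: pilot.validation.istio.io
    clientConfig:
      service:
        name: istio-galley       # Galley serves the admission endpoint
        namespace: istio-system
        path: /admitpilot
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["networking.istio.io"]
        apiVersions: ["*"]
        resources: ["*"]
    failurePolicy: Fail           # reject schema-invalid objects at apply time
```

This tier can only judge one object at a time at admission; the multi-resource tiers discussed next need a different mechanism, since the webhook never sees the rest of the config set together.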
J: So I think we need to come up with a unified naming scheme for what we are going to call this particular thing where your single resource is valid, but when you combine it with multiple things, it adds up weirdly. So I don't know; if you fold that into "validate" it might be confusing, but at the same time "validate" feels like the right name. You see my point?
D: Yeah. It does seem to me, you know, to be getting a little bit away from our current goal of just trying to enumerate use cases, but that doesn't matter; we can probably fold that functionality in and just make it clear that this is kind of a different level of things: something that is technically valid, but maybe here are some problems we think you will have if you try to apply it.
C: Think of the kubectl model, where they've got kind of verb-object, or rather verb, class, object: "get pod foo". We might be able to have, like, "validate virtualservice X" or "validate all", which just goes and gets everything, and then you can provide a -f to a folder full of YAML or JSON files, or it can read your kubeconfig and connect and get config from that direction.
G: istioctl has subcommands, or sub-subcommands I guess, for a lot of the commands, so I think it could be that "istioctl validate objects" lets you do the single-object API conformance, and then "validate", I don't know, something larger or broader, whatever the case may be, feels like a plan: it validates the entire apply, the impact it's going to have, and gives you those warnings. Again, especially for the purposes of CD tooling or GitOps tooling, it would be nice to have that.
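Pulling the last two comments together, the kubectl-style shape under discussion might look like the following. All of these command and flag spellings are hypothetical, sketched from the conversation rather than from any shipped tool:

```
# Tier 1: single-object API conformance, one resource in the cluster
istioctl validate virtualservice reviews

# Multi-resource: everything visible via the current kubeconfig context
istioctl validate all

# File mode: static YAML/JSON before a GitOps deploy, no cluster needed
istioctl validate -f ./manifests/

# "Plan" mode: simulate applying files against the live cluster state
istioctl validate -f ./manifests/ --against-cluster
```

The point of sketching it this way is the naming problem raised above: the same verb has to cover schema checks, cross-resource logic, and apply-impact simulation without confusing users about which tier they are running.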
D: What's here that we haven't talked about yet? So, one of the things that Oz pointed out that I think is really valuable is the idea that we can have validation errors with searchable error codes. Rather than just relying on a particular string message that describes it, we actually have a concrete code for every kind of validation message that we want to surface, and we also make the levels clear, right? You can have errors separate from warnings, separate from whatever.
D: Yes, so that's another thing. What I really love about the Kiali implementation in particular is just being able to see, you know, the YAML file with your line highlighted; it's really helpful, and I think it is really valuable to be able to tie a validation error back to the source where it came from.
E: Yeah. What would be valuable from the Kiali perspective is to try to consume these rules from some Istio component, where, you know, the rules can be something reloadable that we can remark on and contribute to, and, you know, Kiali doesn't need to upgrade just to get more rules. Probably at runtime, for example, we could add more rules, in some fashion similar to how we can add more configuration. So the idea is to have some kind of metadata that describes each rule.
D: I'm writing it down right now. So, whatever we come up with, I want to try to maintain as much as possible the validation logic that's already proven its worth in all these various tools; to the extent we can reuse, if not the actual code, at least the logic, I think that's super valuable.
C: I planned 30 minutes, so I will wrap up here. Is next week a good time? Niraj, I think you're out for a while? "I'm out next week." Yes, okay. I mean, I'll continue writing in this document some architectural thoughts based on the requirements that we have listed there, and we'll kind of follow up online and plan for another video conference in two weeks. Does that sound good, everyone? Sounds right. Alright.