From YouTube: Kubernetes SIG Apps 20190722
A
Okay, so a couple of discussion topics, and we may not be doing the bug scrub today, depending on how GitHub continues to function.
A
Yeah, that's not good, so this might be a short meeting unless anyone else has other topics. Yeah, unicorn of death. Okay, so one thing to be aware of: there's some discussion in SIG Architecture and also in the Steering Committee about using the k8s.io and kubernetes.io domains. Typically we use those domains for the core APIs.
A
So what we're adding is x-k8s.io, so for our SIG, any SIG-sponsored projects would take that domain. So in the future, for instance, the app CRD will move to x-k8s.io. Legacy stuff can be grandfathered in, and of course, for the other stuff, just don't use it, unless it's a core type, or a built-in type where you've gone to SIG Architecture and developed the CRD outside of the core so that it will be included as a built-in type in Kubernetes. So that's really the gist of it.
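As a concrete illustration of the naming convention being discussed, a SIG-sponsored, non-core CRD would declare its API group under the x-k8s.io suffix, leaving *.k8s.io reserved for core and built-in types. The group and resource names below are hypothetical, not a real project:

```yaml
# Hypothetical CRD manifest illustrating the x-k8s.io convention.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: widgets.example.x-k8s.io
spec:
  group: example.x-k8s.io   # SIG-sponsored groups live under x-k8s.io
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```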
There's some other kind of conversation around when we need to get API reviews for non-built-in types, and what the function of those API reviews is. For instance, if SIG Apps hosts a project and we come up with a CRD, and we want to include it and host it as part of our SIG, and we get an API reviewer to review it, should their review purely be focused on just the API correctness?
A
Should their review also include technical correctness of the implementation? And should it potentially be at the level of "hey, we should not do this"? The kind of thinking is that the API reviewer should focus primarily on the API mechanics and the API correctness, kind of the syntax, but not on the latter two categories. Yeah, so, Mike, I did try to review this morning, but as it stands, GitHub, as far as I know, is still down.
A
As far as I know, the CNCF SIGs are primarily advisory bodies. They don't have governance over the core Kubernetes APIs; they're not necessarily reviewers of those APIs; they don't own code. In particular, that's kind of the path that's been set for SIG Storage so far. So they do things like technology evaluations, suggesting best practices, providing tutorials. Brian, have you had any experience with the direction that SIG Apps is taking so far in CNCF, still?
B
It's still getting kicked off; I think they're still trying to get it off the ground. But I do want to read this one quick thing, and I think this might explain. This is from their charter working document. It says: collaborate in areas related to developing, deploying, and managing cloud native applications; and then it says: develop informational resources like guides, tutorials, and white papers, to give the community an understanding of the best practices, trade-offs, and value adds as it relates to developing, deploying, and managing applications in cloud native environments.
B
The next item is: identify suitable projects and gaps in the landscape, periodically updating the TOC with suggested actions in a constructive manner. So what I take from this is that Kubernetes SIG Apps is a little bit lower level and is very focused on Kubernetes, but there's lots of applications in this space, whether Kubernetes-related or not, that definitely need a forum. So I like the idea.
A
One thing that's an opportunity: if you get involved now, you have a lot of opportunity to kind of shape what it becomes, right, because it is still a little bit nebulous. But it's probably not going to be conflicting with the work we do in SIG Apps; I don't anticipate them doing bug scrubs or looking at enhancement requests or that stuff for core Kubernetes. The thing about CNCF, for people who aren't familiar with it, though, is that CNCF is a little bit more than just Kubernetes. There's an entire ecosystem of projects, both in the incubator and a few that have graduated, that fall under the governance of CNCF. So it does have the potential to kind of widen the scope of exposure across a plethora of technologies, whereas SIG Apps is really focused on Kubernetes and the things that run directly on top of it. For instance, I believe RabbitMQ is part of CNCF.
A
Rook is part of CNCF; Linkerd; etcd too... well, actually, I think etcd might not be, I can't think of it at the moment, but a whole plethora of storage and application technologies fall under this space. So, you know, it could be an exciting opportunity to get involved in kind of a broader range of technologies, if core Kubernetes isn't necessarily the focus that you want to spend all of your time in.
B
Let me give you another suggestion of where there could be some overlap. Remember last year, when we went through the whole exercise with the labels, the standard labels, that Brina put together? That is something interesting, because someone else reached out to me saying: hey, I wish that, in our whole CNCF space, our whole cloud namespace, we could just agree on labels. So that is a place where there could be some coordination, actually, yeah.
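The standard labels referenced here appear to be the recommended app.kubernetes.io labels from the Kubernetes documentation; a minimal sketch of how they are applied, with illustrative names:

```yaml
# The well-known recommended labels, applied to an example Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  labels:
    app.kubernetes.io/name: example-app        # the application's name
    app.kubernetes.io/instance: example-prod   # this particular instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: example-suite
    app.kubernetes.io/managed-by: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: example-app
      app.kubernetes.io/instance: example-prod
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example-app
        app.kubernetes.io/instance: example-prod
    spec:
      containers:
        - name: app
          image: example/app:1.0.0
```

Because the label keys are shared conventions rather than tied to any one tool, agreeing on them across a wider namespace, as suggested above, mainly means adopting the same keys.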
A
GitHub is reporting major outages for everything except for GitHub Pages and notifications, so bug scrub is probably not going to happen today. I wish the GitHub folks the best. Hopefully they recover, because if not, I mean, they're on Kubernetes and we use them to build Kubernetes; it's a dependency cycle. If they don't get healthy, we can't ship in bug fixes.
C
So, I mean, I think just to summarize: I think I laid down all the options, and the option that I wanted to go with was basically: go with pod management. If somebody does maxUnavailable with podManagementPolicy equal to Parallel, then it basically says that there are no ordering guarantees, and we will always try to bring maxUnavailable number of pods down, even...
A
Better? Yeah, go ahead, sorry. What I have, my suggestion, was that if podManagementPolicy was ordered, then you go with what you described as option one and try your best to preserve the ordering, potentially at a lower kind of rate of disruption. And if you've specifically specified Parallel and you've specified maxUnavailable to be greater than one...
A
Then
that's
when
you
would
get
the
behavior
that
I'm
going
to
tolerate
a
larger
number
of
disruptions
and
potentially
go
out
of
order
because
I
don't
care
about
it
and
then,
if
you
specify
pod
management
policy
one
with
enjoy,
if
you
specify
Maxon
available
one
with
pod
management
policy,
equal
parallel,
you
would
get
the
default
behavior.
You
have
today
preserving
the
existing
semantics
right.
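A sketch of how the Parallel combination being discussed might look on a StatefulSet, assuming the maxUnavailable field lands under updateStrategy.rollingUpdate as proposed; at the time of this meeting this was still a proposal under discussion, not a shipped field, and all names below are illustrative:

```yaml
# Sketch only: maxUnavailable for StatefulSet rolling updates was a
# proposal here, not a GA API field.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example
  replicas: 5
  podManagementPolicy: Parallel   # opt out of ordering guarantees
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2           # proposed: tolerate 2 pods down at once
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: example/app:1.0
```

With maxUnavailable: 1 and podManagementPolicy: Parallel, this would reduce to today's default one-pod-at-a-time rolling update, matching the existing semantics described above.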