From YouTube: Kubernetes Community Meeting 20180524
Description
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
All right, so I'll be talking today about Argo. Argo is a general-purpose, container-based workflow engine, specifically for Kubernetes, so I like to think of Argo as basically a fancy job controller. It's implemented as a Kubernetes controller and a Workflow CRD, where every workflow step is a pod.
Argo was implemented with the CI/CD use case in mind, but it's actually proven to be flexible for other things like data processing and ML pipeline training workflows. It's one of several projects that we have under the Argoproj umbrella. Today we'll just be talking about Argo Workflows, but the workflow engine is kind of a building block for other, higher-level applications like Argo CI and Argo CD, and indeed some of our users and contributors are in kind of the data processing space.
So this is just kind of a list of features that Argo supports. I won't go through everything, but I'll highlight a few. We have built-in support for artifact management, so you can load and save artifacts out of a user's container without actually touching their entry point or anything.
C
Or
anything
we
have
parameterizations
of
workflows,
you
can
loop
and
can
have
conditionals
inside
your
workflow.
You
can
retry
at
the
steps
level
or
at
the
workflow
level,
and
because
we
are
a
container
base,
we
can
leverage
everything
that
the
kubernetes
has
with
respect
to
pods.
So
things
like
volumes
scheduling
if,
like
pod
affinity,
toleration,
node
selected
time
outs
through
octave
deadlines
seconds,
we
kind
of
we
get
all
of
those
kind
of
features,
basically
for
free.
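As a rough illustration of how those pod-level features surface, a workflow template can carry the familiar pod fields directly. This is a minimal sketch; the image, selector, toleration, and retry values are all made up:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pod-features-
spec:
  entrypoint: main
  templates:
  - name: main
    # Retry this step up to 3 times if it fails.
    retryStrategy:
      limit: 3
    # Kill the step's pod if it runs longer than 5 minutes.
    activeDeadlineSeconds: 300
    # Standard pod scheduling knobs pass straight through.
    nodeSelector:
      disktype: ssd
    tolerations:
    - key: dedicated
      operator: Equal
      value: batch
      effect: NoSchedule
    container:
      image: alpine:3.7
      command: [sh, -c, "echo doing work"]
```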
C
So
little
bit
about
a
high-level
architecture,
how
its
implemented
so
design
is
fairly
simple.
You
have
a
workflow
controller,
that's
running
as
a
deployment.
You
have
a
UI
which
is
optional
and
a
CLI,
and
all
of
these
services
are
interacting
basically
with
the
kubernetes
api
server.
No
one's
really
listening
on
any
ports.
Aside
from
our
go
UI
they're,
all
just
interfacing,
with
the
kubernetes
api
server,
when
the
workflow
controller
schedules
pods,
it
will
inject
a
an
it
container
and
a
sidecar
wait
container
next
to
the
user's
main
container
and
the
purpose
for
these.
And then I'll just go to the UI, and basically that's it. So you have a single-container workflow, and from here you can look at the logs. This one has no artifacts, but if it had artifacts, you would be able to download them from the UI. And this is just kind of going over the YAML definition. So, as I mentioned, there are two ways to define workflows.
The DAG workflow that I just submitted is a classic diamond pattern, where B and C are dependent on A, and D is dependent on B and C, and the workflow controller will execute it in dependency order to get to the final result. With the steps-based one... oops, shoot.
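The diamond just described can be written as a DAG template. A minimal sketch, with illustrative task and template names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
      - name: B
        dependencies: [A]    # B waits for A
        template: echo
      - name: C
        dependencies: [A]    # C runs in parallel with B
        template: echo
      - name: D
        dependencies: [B, C] # D waits for both
        template: echo
  - name: echo
    container:
      image: alpine:3.7
      command: [echo, hello]
```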
Can you see my screen? I accidentally clicked on stop video. Okay, all right. With the steps-based syntax, steps execute top-down. Each double dash indicates a new group in the sequence, and the steps within a group run in parallel. Here we have a first step executing a container, and then the second step group is actually two containers running in parallel, and that looks like this.
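In YAML, the two step groups just described might look roughly like this (a sketch; the step names and image are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    # Each "- -" starts a new sequential step group.
    - - name: step1        # first group: a single container
        template: whalesay
    # Steps inside one group run in parallel.
    - - name: step2a
        template: whalesay
      - name: step2b
        template: whalesay
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay, hello]
```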
Okay, so those are the simple examples. You can actually invoke templates from within templates. So in this example, I have that same diamond template, but each step in the diamond is actually invoking another template called coinflip. Coinflip is a template which basically flips a coin and produces a heads-or-tails value, and depending on the value of that result, it will branch between two different steps of that workflow.
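That branching is expressed with a `when` condition on each step, keyed off the previous step's output. A sketch, close to Argo's stock coinflip example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: coinflip-
spec:
  entrypoint: coinflip
  templates:
  - name: coinflip
    steps:
    - - name: flip-coin
        template: flip-coin
    # Only the step whose `when` clause matches will run.
    - - name: heads
        template: say
        when: "{{steps.flip-coin.outputs.result}} == heads"
      - name: tails
        template: say
        when: "{{steps.flip-coin.outputs.result}} == tails"
  - name: flip-coin
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        print("heads" if random.randint(0, 1) == 0 else "tails")
  - name: say
    container:
      image: alpine:3.7
      command: [sh, -c, "echo I ran"]
```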
So I retried both of those same two containers, and this time one more failed. This is illustrating the fact that you can actually retry from where the workflow left off, and it erases the history of the failure, so you only see that last retry of the last failing piece.
We have a docs item: having a placeholder PR for any docs for new features is required. So if you are working on a feature which is targeted at 1.11, that is, one that's on the feature tracking spreadsheet or otherwise, you need to open a placeholder PR against the docs repo, which is where your eventual completed documentation will go. If you don't, then the docs team will come after you and pester you. So please go ahead and do that. And then, coming after that:
Next Tuesday is the beginning of code slush, which means that at that point we will be asking people to focus on getting everything cleaned up and ready for feature integration for 1.11. That is the beginning of the period where, for three weeks, we kind of ask people to try to focus on getting the current release out, and maybe not work on stuff for future releases for a brief period. So, for code slush at that time:
A couple of other minor things: one of the things that begins with code slush and code freeze that I want to call out is burndown meetings. The idea of burndown meetings is for the release team and the release leads of any SIGs, or the authors of any features, to meet up, at first three times a week and then, in the last week, on a daily basis, to make sure that everything is being merged in and everything is working correctly. In the past, those have been held at 10 a.m.
Okay, yes, excellent, cool. So for those of you who were at KubeCon, this will basically just be a repeat of what I said up there. Very quickly: in SIG Service Catalog, as of October 2017 we've been in beta, and we're trying to move as quickly as we can towards 1.0. To that end, we are trying to finalize our list of version 1.0 work items. You can see the list there: namespaced brokers, asynchronous bindings. We are looking at using CRDs instead of our own standalone API server, and not using aggregation.
It's actually very Cloud Foundry-like, if you're familiar with how they deal with services and bindings and hooking up your apps and services, so this may look very similar to those of you who come from a Cloud Foundry background. As I mentioned, we're looking at namespaced brokers; they are in there, or inside the PRs there, but not necessarily merged yet. And as I said, we're looking at whether we can use CRDs instead of our own API server.
In fact, based on the shout-outs, a big kudos to Carolyn as well. I think I mentioned the shout-outs: yeah, we get a lot of people joining the group, and it's great to hear people thanking all the people there who helped bring them onboard, educate them, and show them patience, you know, because sometimes newcomers don't feel as welcome as they should, and luckily they're getting lots of help from people within the group.
So we had over 40 people in attendance. It's a pretty big SIG, and we usually have between 40 and 50 people, representing about 19-plus companies. VMware provided the facilities for us, and lunch, and Google provided dinner, so I want to give a shout-out to those two companies. Our agenda was primarily technical.
We started out with CSI, which is a big driver in the storage SIG. We're trying to get most of the functionality that's in-tree moved out of tree, and CSI is the mechanism for us to do that. In the CSI discussion we didn't go through too much new stuff; it was mostly the current state of the art and what our plans were for 1.11. We have block storage, we have a driver registrar, we're trying to merge, or trying to stabilize, some of the CSI shim layers, and then there's a couple of other features.
Okay, so snapshots. This is another thing that we've had in-tree for some time, and we're moving it out of tree. We've decided to quit adding features to the in-tree function and instead just add new stuff to CSI, and achieve the same feature parity that we have in-tree. So there are two new methods for CSI: a create-snapshot call and a create-volume-from-snapshot call, and this is work that's in progress and should be landing as part of 1.11.
We had a discussion on topology and local PVs. This is scheduling storage across different zones and different racks, making sure the storage lands where the compute happens, or at least in close proximity to it. This was another design presentation, and it's something that we're working on for 1.11 as well, and there are some CSI touch points that are still in progress.
The next topic was local storage. This one we've been adding to over the last two or three releases, and the design is pretty stable. I think the big piece that we are working on right now is how we report, from a node, the capacity of local storage that can be provisioned there. Local storage is different in that it breaks our current PV/PVC binding controller, and it requires binding to be done close to scheduling time.
So, where before you were scheduling on memory and CPU, now we also have to have a scheduler predicate on disk space. We have to locate nodes that can actually handle all three things, then schedule the pod there and bind the PV and PVC at the same time. So this is work in progress, and we're still making pretty good headway on it.
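That delayed, scheduler-aware binding is what eventually surfaced as the `volumeBindingMode` field on a StorageClass. A sketch of how a local-storage class opts into it (the class name here is illustrative):

```yaml
# Delay PV/PVC binding until a pod using the claim is scheduled,
# so the scheduler can weigh CPU, memory, and local disk together.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner  # local PVs are pre-provisioned
volumeBindingMode: WaitForFirstConsumer
```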
The next feature that we talked about is volume expansion, which is resize without the shrink. We have an offline resize that we made available in 1.10, and we're working on online resize for 1.11. There are a couple of technical details on the screen, but the majority of the work in 1.11 is reaching feature parity in CSI, and that involves adding a controller-side expand call and a node-side expand call to the CSI spec.
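On the user-facing side, a volume type opts into resize through its StorageClass. A sketch (the provisioner shown is just an example):

```yaml
# Volumes from this class may be expanded (never shrunk) by
# editing the storage request on a bound PVC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable
provisioner: kubernetes.io/gce-pd  # illustrative provisioner
allowVolumeExpansion: true
```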
Our next topic was around testing and test coverage. We've done a pretty large inventory of our tests as part of the SIG, including what we're missing, and we're adding new tests. Beyond phase zero, there is phase one: we want to make sure that these tests are automated and running in CI. We've had quite a few technical obstacles around this, primarily around mount namespace propagation and the different host drivers required to actually run these tests in all the different environments. We have tests running in Google Cloud.
Our hope is to someday have all of these tests running on the other cloud providers too, and our current plan to execute on this is to start with VMware, since VMware is very well represented in SIG Storage and they've offered resources for us to run these tests. Then, once we get everything running on the VMware cloud, we will use that as kind of a blueprint to ask the other cloud providers to run the same tests. For reliability, we are asking for a six-month commitment from vendors who want to be part of the automated CI testing.
Next, we discussed operators. This is becoming a pretty big topic in SIG Storage. We have a lot of systems that contain data that's either copied or extrapolated from other systems, and we want to keep the two in sync, and the operator pattern has emerged as a very popular and easy way to do this. We have a couple of operator patterns existing already in our external storage provisioners, and then in some of the snapshot work there are some snapshot controllers that use the same library.
But it turns out there are quite a lot of potential operator frameworks that we can use, so we had some demos of what other people are using for operators. Along those lines, we've talked about breaking up the external storage repo, which uses one common library. We explored the idea of using the shared library going forward, but we've determined that that's untenable, and each project coming out of SIG Storage would probably use its own operator library.
We're adding a few metrics to assist our SREs in taking corrective action, and we're also looking at ways to add metrics specifically from the cloud providers. So for things like the number of volumes attached to a node, we'd like the cloud provider to add that information, so that we can compare it against what Kubernetes thinks is attached to a node and then take some corrective action if needed.
So we've been working to identify project owners for each project and to move the projects someplace else, into their own repositories. We went through them, and I would say about three-quarters of our projects don't have an identified owner, and it's about the same five people that are running the majority of the projects. So if people from the community are interested, or have storage projects in that external storage repo and haven't been present on the SIG Storage calls:
please consider taking ownership. And then, as a second part of this, we talked about the in-tree volume drivers and how we would move them out of tree. There are, I don't know, 15 or so in-tree volume drivers, and we want to make them CSI plugins at some point. The CSI plugins would be container-orchestrator agnostic, so they're not a clear fit inside the Kubernetes sphere of influence, but since most of these drivers would probably be authored and maintained by Kubernetes contributors, we're looking at the proper place to home them.
All right, okay. Really exciting stuff from SIG Storage. I'm going to make another call for Jim Sinclair. Yeah, terrific, I'm glad you made it. Okay, I'm sharing my computer.
So first, to do shout-outs: as previously mentioned, shout-outs to a lot of the svcat team for bringing on and mentoring some new contributors to that project and helping them a lot. A shout-out to Mike Splain for running the Boston Kubernetes meetup, and also to Paris for helping with mentoring. We have some help-wanteds in here.
Yes: if anybody on the line is a front-end developer that would like to help out with the dashboard... they actually said that they'd take some back-end developers too. They're going through an Angular transition, and they will do one-on-one mentoring. Today they had an update call, and it will be posted later, so I'll post the link in the agenda; but also join their SIG UI Slack channel for more info.
Yep, we are currently doing a drive for contributor mentors, so if you're interested: yes, you can, and you don't need to be a maintainer; you can be just a member of the organization, and that is completely fine. We have seven programs right now; half of them are in testing, and the other half are launched. One of the ones that's launched is Meet Our Contributors, which is a once-a-month YouTube series for folks to ask mentoring-type questions of other contributors. It's very fun, and we have two time slots, and the next one
is actually June 6th, I believe. So if you're interested, please feel free to fill out that form. That form is actually good for all mentors and mentees; it's a one-stop-shop form. We would love to have anyone that is interested in getting more contributors into Kubernetes and into any part of the project.
On policy, not much has been updated since the last SIG Auth update. We're still working through the scheduling policy proposal and kind of thinking about the long-term, broader policy strategy in Kubernetes, in terms of which things we want to have domain-specific built-in policies for, and which things we want to defer to third-party projects and plugins, or to more generic policy languages like the Open Policy Agent, for instance.
There were cases where the node was adding taints to itself, and so we're still working through some of the details of how to continue to support those use cases without allowing nodes to remove taints and attract sensitive workloads. On security conformance, there's a KEP open right now around security profiles.
C
This
is
by
yeast,
suite
or
ease
way
is
his
github
handle,
and
this
is
looking
at
how
we
might
do
a
better
job
of
testing
security
properties
of
the
cluster
and
then
also
tying
it
to
the
security
profiles
idea,
which
is
a
way
of
kind
of
grouping,
different
configurations
together
to
say
you
know
these
are
the
best
practices
that
you
should
always
use.
These
are
the
sort
of
more
restrictive
settings
that
you
might
use
if
you
need
a
hardened
cluster,
for
instance,
and
then
associating
different
conformance
tests
with
those
different
profiles.
C
So this is very much a work in progress; it probably won't land for 1.11, maybe 1.12. And there hasn't been any progress on the bug bounty program; we're still working on improving our security release infrastructure before we move forward with that. I think those are all the updates I have. Are there any questions about that?
Okay, thank you very much for making the meeting. And that is all we have for this week, unless something came up that urgently needs to be shared. I want to thank our note-takers, Tim Pepper and also Mukulika (if I'm pronouncing your name correctly; sorry, we've never met, if I'm mangling it), who also helped with the notes. So thank you very much for taking notes for everyone, and I will see you all next week.