From YouTube: Kubernetes Community Meeting 20190404
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: Hi, welcome to the Kubernetes community meeting for this week. My name is Vallery Lancey; I work at Lyft, doing a mix of downstream and upstream work. Today we have a pretty busy schedule: our usual SIG updates, information about the latest release, and two demos, so in just a second I'll kick it over to the first demo.

First, a reminder: we do have the CNCF community code of conduct. Make sure that you abide by it when participating. We can simply sum it up as "be excellent to one another."
B: Thanks for having me. I'll try to keep this short, ten minutes. I just did a two-hour meetup on k3s, so I'm really going to have to condense that down. Basically, what I'm presenting here is k3s, or "kays," however you want to say it. A lot of people ask what the name means; it doesn't actually mean anything. It's just kind of a geeky thing: "Kubernetes" is K8s, a ten-letter word, so k3s would be a five-letter word, which is half the size.
B: We wanted a lightweight Kubernetes, so we picked a shorter name. I'm Darren Shepherd from Rancher Labs; I'm a co-founder and the chief architect, and I do my best to participate in the community and do a lot of stuff. So, basically, k3s: what are the basic goals of this? It's really for running Kubernetes in resource-constrained environments. We've done a lot of marketing around edge; that's an obvious, immediate business use case we have, but there's a lot more we can do besides just edge.
B: The basic idea is that managed Kubernetes offerings, like GKE or EKS as a SaaS, are great, but there are other use cases for Kubernetes, and especially for the Kubernetes architecture and operators; there's a lot we can do with Kubernetes in different environments.
B: These are things like edge, IoT, or dev and test, but there are also other use cases, like single-app clusters, where you're going to end up with a lot of clusters that might each just be a couple of servers, or where you want to embed Kubernetes into an application: you want to be able to deliver a microservice application to a customer, and it's powered by Kubernetes, but you kind of want to package Kubernetes with it.
B: There are a lot of use cases we see like that. I think the best way to describe it, for people who know the Java world, is that our goal is kind of to become the Jetty of Kubernetes: we just want to be the lightweight, slimmed-down, flexible, embeddable Kubernetes.
B: So, basically, what does that mean? Fundamentally, what are we doing with this? First and foremost, we're a lightweight Kubernetes distro. What that mostly means is that we reduce the base memory footprint, and that honestly just means removing a lot of code. I'll go into the next slide and detail what we removed, but hopefully it shouldn't be anything that causes any big impact.
B: Sorry, there was one point that I forgot to make on that last slide: everything we're doing with k3s is intended to be production quality and fully certified. We are CNCF certified; we passed all of the conformance certification, and we did that at day one, and we are going towards production. So this is not just dev and test, and it's important that everything works: when I talk about having ripped out code and whatnot, it doesn't hinder the functionality of Kubernetes.
B: Any time you say the word "simple," there are pros and cons to simple. We've done our best on sane defaults and being secure by default, but we obviously make choices for the user, so there are specific choices and things that we've done. We try to make it so that there's a sane default behavior, and then you can turn it off and customize it.
B: Most of what we did to remove stuff is removing legacy and non-default features. That means a lot of admission controllers that are not on by default; most cloud providers don't allow you to customize them, and you really shouldn't be using a lot of these things anyway, because they impact your portability. So we removed a lot of admission controllers, and we removed some of the really old APIs that just aren't being used anymore, where we really don't see any use in the wild.
B: All alpha features are dropped, and then basically anything that's an in-tree driver or plug-in with an out-of-tree equivalent we got rid of, so that's cloud provider drivers and storage drivers. We don't have a huge use case in k3s for cloud providers, because the cloud is not our target; if you're in the cloud, you might as well just use EKS or whatever.
B: But if you want to use a cloud provider, you will be using the external ones, and for storage it's pretty much all CSI. The last thing is that we took out Docker, and by default we run with containerd. Any CRI runtime would work, but containerd is what we package by default, and that ends up cutting out quite a bit of memory. So those are the things we've ripped out. The things we added: there's a wrapper that makes the installation really easy (you'll see that in the demo), and we use SQLite instead of etcd.
B: Well, it's in addition to etcd; this just makes it even smaller and simpler for these resource-constrained environments. We do all the TLS management pretty much automatically, and there's built-in functionality to automatically deploy manifests and Helm charts, so it makes it really easy to package up an application with this.
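As a rough sketch of that auto-deploy behavior (the directory and CRD names follow the k3s docs; the chart referenced here is just a placeholder):

```shell
# On a k3s server, any manifest dropped into this directory is
# applied to the cluster automatically and re-applied on change:
sudo cp my-app.yaml /var/lib/rancher/k3s/server/manifests/

# Helm charts can be deployed the same way via a HelmChart manifest:
sudo tee /var/lib/rancher/k3s/server/manifests/my-chart.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: my-app            # placeholder name
  namespace: kube-system
spec:
  chart: stable/my-app    # placeholder chart reference
EOF
```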
B: This is kind of your basic architecture. I assume everyone on this call knows the Kubernetes architecture, but the thing we did differently, if you look at the processes, is that we combined things into single processes. If you're running a large cluster, it obviously makes sense for these to be separate processes; if I'm running a smaller-scale cluster, it doesn't matter so much. So we didn't break any of the control plane and data plane separation, but we did combine processes: kube-proxy and the kubelet, for example, are one process.
B: Flannel is actually embedded in the same process too, but one of the key things that we did is this tunnel proxy. If you look at this diagram, the agents only make outbound connections to the server. We have this tunnel proxy that basically establishes a long-lived tunnel to the server, so we can do reversed connections through it.
B: These machines are arm64. I don't think I've mentioned this before, but by default we support x86_64, arm64, and ARMv7, and all of those are tested; binaries are immediately available as multi-arch binaries and multi-arch images. Okay. So basically, if you want to run k3s, you just download the binary; you can get it from our releases page.
B: So, let's see. Alright, good: download that binary, and you're basically just going to run `k3s server`. I'll show you the help here so you can see what this binary is; it's basically `server`, `agent`, and `kubectl`. If you want to run the server (the server includes the API server, controller manager, and scheduler), this will get the server up and running. It takes a second for that to come up, so while it's coming up, just as I'm pressed for time here...
B: When you launch the server, by default it registers an agent on the same node, so you get a cluster. If you want to join another node to the cluster, you just run this `k3s agent` command, which it tells you on startup, and you have to get the node token from this file in the background.
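Roughly, the join flow described here looks like this (the server URL is a placeholder; the token path is the one k3s documents):

```shell
# Start a server; it runs the API server, controller manager, and
# scheduler, and registers a local agent by default:
k3s server &

# On another node, read the join token from the server and join:
NODE_TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
k3s agent --server https://myserver:6443 --token "${NODE_TOKEN}"
```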
B: Super simple. So before I run that, let me just show you real quick: `k3s kubectl get node`. Oh, I see; I think I did a test before, so it shows two nodes before that one is actually joined. I think I left the database there, but anyway: once you run that `k3s agent` command, it then joins, and I have a full cluster. If I look at what's in the cluster...
B: If I look at what's in Kubernetes by default, what we're deploying (because we try to make this fully functional): we basically package CoreDNS, so that's there by default; we're using Traefik as an ingress controller, so you automatically get ingress; and we have this host-port-based service load balancer. The service load balancer works by using host ports, which makes sense for a lot of our use cases.
B: So if you actually want to install this, I do recommend the curl script. You can download the script and run it yourself if you need to get past the "curl piped to shell" concern, but if you run the actual curl script, it's going to do a better job of installing, because it'll set things up in systemd. So let me just show you this, and then my demo is done. Sorry.
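The curl install being referenced is the one-liner from the k3s README; it sets up the systemd service and symlinks described next:

```shell
# Install k3s as a systemd service (the script also symlinks kubectl):
curl -sfL https://get.k3s.io | sh -

# Verify the service and the cluster:
sudo systemctl status k3s
kubectl get nodes
```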
B: If you actually run the curl script, it's going to set up a proper systemd service, so you can do `systemctl status k3s`. It also sets up some symlinks, so you don't have to type `k3s kubectl`; instead you can just type `kubectl get node`. There you go: it's up and running. That's pretty much the little demo; thanks for listening. The website is k3s.io, and there's our GitHub; it's pretty active. We just launched this about a month ago, and we've gotten a huge, really positive response.
E: Hello folks, my name is Prasad. I'm from InfraCloud Technologies, which is a CNCF partner in India, and today I'll be talking about BotKube. This is the agenda for today's talk: we'll discuss what we are trying to solve and how BotKube helps in addressing those issues; then we will see the end-to-end workflow, the BotKube architecture, and how you can write a filter. Okay?
E: There should be an easy and quick way to debug our Kubernetes applications, and if I have multiple clusters, there should be a common platform through which I can access any required cluster. And when I create a resource, I should get a recommendation about the best practices for creating that resource. So that's where BotKube helps: BotKube is basically a Slack app which monitors your Kubernetes cluster.
E: You can also use BotKube to debug your Kubernetes deployments. BotKube runs specific checks on the Kubernetes resources and provides you with recommendations about the best practices for creating those resources. Okay.
E: Rather than talking too much about BotKube, I will show you BotKube in action. I have a workspace which has BotKube installed in it, and if I do `@BotKube ping`, I'm getting responses from both backends. This means I have two backends for this application; one is in a GKE stage cluster.
E: I'm getting notifications about the lifecycle events. Okay, now I can see the pod details. So this is how BotKube helps you monitor your deployments, and if I want to debug this application, I can do that too; BotKube basically helps you with debugging the applications. If you look at the help, you can execute kubectl commands using BotKube, so I'll just do `@BotKube get pods`. Okay, so I'm getting a response from BotKube, and now I want to check the logs for that failure.
E: Along with the lifecycle events, I'm also getting recommendations: it's saying that the "latest" image tag should not be used, and since I'm actually creating this pod without any labels, it's also making the recommendation that I should add labels to the pod. Okay. So this is how you can debug your Kubernetes applications right from the Slack window.
E: So it solves our monitoring issue, it also provides a quick way to debug your applications, and it gives you recommendations about the best practices. This is the end-to-end workflow for a BotKube application. BotKube can be divided into two parts: first, you install the BotKube Slack app in your Slack workspace, and second, you need to install the BotKube backend in your Kubernetes cluster. One Slack app can have multiple BotKube backends, as I showed in the demo.
E
So,
depending
on
your
configuration,
what
cube
registers
inform
us
on
the
API
server
and
this
is
for
the
quality
Stephens
once
it
gets.
Events
from
Community
Safety
server.
It's
forwards.
Those
notifications,
like
it,
select
your
slack
workspace
or
slack
channel
okay.
So
you
can
look
for
two-way
communication.
You
work
you
back
in
listens
for
the
messages
from
slack
ramp
and
runs
what
you
run
stupid,
Atomics
and
changes
force
back.
So
what
Q
has
read-only,
serviceable
or
I
can
say
you?
E
Can
you
can
control
the
or
restrict
the
execution
of
certain
commands
or
restrict
access
to
the
certain
resources
using
the
rules
you
bind
through
the
service
account
or
what
you
okay,
so
going
deep
into
the
what
you
backed
in
architecture?
So
in
what
you
back
end,
we
have
informal
controller
which
which,
depending
on
your
configuration
register,
inform
us
on
PvP
or
server
once
you
get
even
from
QAPI
server
it
we
have
event
manager
which
transform
standard
abilities
event
into
bot,
giving
this
by
getting
only
required
information.
E: From this event, we also have a filter engine which runs filters on the Kubernetes specs and adds recommendations to the BotKube event. Okay, so once we've run the filters, the notifier actually sends the BotKube event to the Slack channel. For two-way communication we are using the Slack RTM API, which listens for messages from Slack, and the executor executes the kubectl commands and sends the response back. Okay, so this is the configuration you have to provide to the BotKube backend; I think this is too small, so yeah.
E: BotKube has very fine-grained configuration: you can configure the resources you want to watch, the namespaces you want to watch them in, and the required lifecycle events you want to get notifications about. Okay. You can also configure the warning or error levels of the events, you can turn the recommendations on or off if you want, and then you have to provide the settings and access token for the Slack communication, basically. Yeah.
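As a hedged sketch, a BotKube backend configuration along the lines described might look roughly like this (field names are illustrative, not a verbatim copy of BotKube's schema):

```yaml
# Illustrative BotKube-style config: resources/namespaces to watch,
# lifecycle events to report, recommendations toggle, Slack settings.
resources:
  - name: pod
    namespaces: ["default", "staging"]    # example namespaces
    events: ["create", "delete", "error"] # lifecycle events to notify on
recommendations: true                     # best-practice recommendations on/off
communications:
  slack:
    channel: "#k8s-alerts"                # example channel
    token: "SLACK_BOT_TOKEN"              # access token placeholder
```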
E: This is the standard workflow for writing your own filter. We have a filter interface; you can write a Go function to satisfy the filter interface, which will run checks on the Kubernetes specs, depending on your policies, and add recommendations to the BotKube events. So yeah. This is the roadmap: we have added multi-cluster support in the latest release, we will also be adding integrations with other benchmark solutions like kube-bench and so on, and we will also try to provide support for dynamic loading of filters. Yeah.
C: Hello! I'm hopping on today to introduce myself to the community, since I don't normally attend this meeting, but I'll start doing so to give 1.15 updates. I'm going to be the release lead for 1.15; formerly I was the enhancements lead in 1.14, and now release lead for 1.15 (sorry if I confuse those numbers). Yeah, we're planning to kick off the release cycle next Monday, so for all the SIGs out there, start expecting to hear more from myself and our incoming enhancements lead, Kendrick, around what different enhancements are being tracked for this cycle.
C: We are going to keep doing the KEP process that we did in 1.14, so every single enhancement must have a KEP in an implementable state by enhancement freeze, or it will be removed from the 1.15 milestone. And if you have any questions for me, feel free to ask them here or reach out on Slack in #sig-release. Also, as a warning, I have no KEP t-shirts, so I will not be able to one-up Aaron on that one.
F: Okay, so yeah, I'm Derek Carr; I co-chair SIG Node with Dawn Chen, and I'm here to give our community update. If people have questions, feel free to interrupt and ask away. So what did we do last cycle? We have a recurring theme in SIG Node of trying to improve both reliability and support for more workloads. On the improved-support side, we've done a lot of work with SIG Windows to help graduate Windows node support to GA.
F: That was a big accomplishment, and something that's been going on for a long time in the community, so we're really happy to see that. On supporting more workload diversity, there's been an effort underway for a while around the concept of a RuntimeClass. For those who aren't familiar, a RuntimeClass basically allows Kubernetes to be aware of a particular runtime.
F: You might need a particular runtime to run your pods: if you would like to isolate your pod in something like Kata Containers, gVisor, or runC, an admin can use the RuntimeClass concept to make that known to Kubernetes. That particular feature went to beta as well.
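In its 1.14-era beta form, using a RuntimeClass looks roughly like this (the class name and handler are examples; the handler must match a runtime configured in your CRI implementation, e.g. gVisor's runsc):

```shell
# Define a RuntimeClass mapping to a CRI handler, then reference it
# from a pod via runtimeClassName:
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: sandboxed           # example class name
handler: runsc              # example handler (gVisor)
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: nginx
EOF
```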
F: On the reliability front, we closed the loop on a long-standing issue we had around being able to limit PIDs: while we've been able to detect PID pressure for a while, limiting was not working right, and we've closed that out in this past release.
F: So we now have a feature gate that is beta and on by default, where you can have the kubelet put a cap on the number of PIDs a given pod may consume, so that a pod can't fork-bomb your entire cluster. And then we have a new feature as well which allows you to reserve a set of PIDs for your node agents, similar to how you can reserve CPU and memory; that feature was introduced as alpha, and we're looking to accelerate its move to beta shortly.
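For reference, the 1.14-era kubelet knobs behind these two features are roughly the following (flag and feature-gate names per the Kubernetes docs of that era; the numeric values are examples):

```shell
# Cap the number of PIDs any single pod may use (SupportPodPidsLimit
# is the beta feature gate that is on by default):
kubelet --pod-max-pids=1024

# Reserve PIDs for system daemons and node agents, analogous to
# reserving CPU/memory (SupportNodePidsLimit, alpha at the time):
kubelet --feature-gates=SupportNodePidsLimit=true --system-reserved=pid=1000
```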
F: Then, as with every release, we do a number of bug fixes to try to improve reliability, because at the end of the day a lot of people depend on the kubelet to run their workloads reliably. And finally, we've had a number of features that lingered in beta for a long time that we graduated to GA, just as overall cleanup; one was around huge pages.
F: We have the capability to consume pre-allocated huge pages in your pod, and we moved that to GA after it sat in beta for three or four releases. The labels we use to describe the operating system and the architecture of the node also moved to GA, and this is probably important for folks to read the release notes on, because the old labels will go away in a future release.
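Concretely, the label graduation mentioned here is the move off the beta-prefixed keys:

```shell
# Beta labels (deprecated)       GA labels (1.14+)
#   beta.kubernetes.io/os    ->   kubernetes.io/os
#   beta.kubernetes.io/arch  ->   kubernetes.io/arch
# Inspect the new labels on your nodes:
kubectl get nodes -L kubernetes.io/os,kubernetes.io/arch

# Node selectors in pod specs should migrate to the new keys, e.g.:
#   nodeSelector:
#     kubernetes.io/os: linux
```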
F: We're still working through our plans for the upcoming cycle. I'll give a link to our areas of discussion, so folks who want to review the deck afterwards can, but there are roughly eight high-level areas we're exploring and still working through. One is just code cleanup: the CRI is an API that allows you to plug different container runtimes in behind the kubelet. It has existed in the kubernetes/kubernetes repo for a long time, and it has now moved to its own dedicated repository, so that's a good thing.
F: For a long time we've been trying to improve performance for performance-sensitive workloads, and there's some great work coming from, I guess, our friends at Intel and NVIDIA, where we're trying to improve CPU and device topology alignment, so that if you have latency-sensitive or GPU-sensitive workloads, the kubelet does a better job of scheduling them. The KEP for that is, I think, in an implementable state; hopefully we can start getting the code in this release.
F: Then, while we talked about RuntimeClass graduating to beta last release, we're trying to do some work this release to improve topology-aware scheduling around it, and this is a KEP that's been going on with SIG Scheduling. And then, unique to RuntimeClass: for some runtimes, say you're using runC, the overhead of running your container is relatively small, but if you run your container in a different type of container isolation technology, sometimes that overhead is much larger, and so you need to account for it.
F: Also in this release is a long-standing feature request, debug pod support, which hopefully we can continue to move forward. So there's a lot of interesting stuff going on in SIG Node; people are welcome to join and participate if those topics are of interest, or if there are topics you didn't see here that you'd like to raise. We always welcome input.
F: So how do these plans affect you? I think the pod overhead proposal probably has some relationship to SIG Autoscaling, in particular for anything that might be going on around vertical pod auto-sizing, and there's the desire to be able to support in-place updates of pod resources: if you put a vertical pod autoscaler against a pod that's running, then rather than getting a whole new pod when you resize your request, the vertical pod autoscaler would be aware that pods could potentially resize in place and not need to be recreated.
F: If any of the topics discussed here are of interest, we'd love input from everyone, both SIG members and users alike, on the KEPs. So, as we finalize, a roundup of some of our subproject statuses. Like I said previously, the CRI API has moved into its own separate repo; if you're interested in building against the CRI, look there and you'll see its evolution, since changes to the CRI happen in that repo first. The CRI-O project had previously been sponsored by SIG Node, and it recently got accepted into the CNCF.
F: The project is in the process of moving, and it will soon be housed in this new GitHub location. If you were a user of or a contributor to the CRI-O subproject, you're probably already aware of this, but there will be a new home, and congratulations to the CRI-O subproject on their adoption into the CNCF.
F: The cri-tools project is another subproject that SIG Node sponsors, and it's continuing to improve the CLI and validation tools for debugging any CRI-based runtime. If you are running Kubernetes in production using a particular CRI, you're probably already familiar with debugging using crictl, but if you have enhancement requests, please bring them forward to this repo. For the most part, the work happening here is just keeping up with the latest changes.
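For anyone who hasn't used it, crictl gives docker-style debugging against any CRI runtime (the socket path here is containerd's default and is just an example):

```shell
# Point crictl at the runtime's CRI socket and inspect workloads:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
crictl pods                      # list pod sandboxes
crictl logs "$CONTAINER_ID"      # fetch logs for a container ID
```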
F: With each Kubernetes release, the node-feature-discovery project continues to evolve and, I guess, just detect more things. So if you want to understand and inventory what hardware is on your nodes when you're running production, so that you can schedule to them differently, please join the contributing community here in node-feature-discovery.
F
Discussions
at
this
point,
we've
decided
to
expire,
the
workgroup
for
the
moment
and
focus
on
actually
executing
on
delivering
the
less
previously
defined
items
in
normal
sig
node
operations,
and
particularly
hoping
to
focus
on
CPU
and
device
alignment
for
everyone
who
did
participate
in
that
working
group.
A
big
thank
you,
I
think.
F
A
G
G: Can you hear me? Yes? Okay. I'm Sean Sullivan, the co-chair for the command line interface SIG, and I'll give a quick update. We last updated less than three months ago, in mid-January, so this will cover what's relevant over the last three months. Just a quick reminder of what SIG CLI, our command line interface SIG, is: we focus on command line tools and libraries that interface with the Kubernetes APIs, and here's a list of our subprojects. You'll notice the last one is named krew; krew has joined as a subproject of SIG CLI within the last three months, and I've got some more information on that coming up. But first things first: we have a new logo for kubectl, and thanks to Ashley McNamara for creating this.
G: Okay, so what are the announcements from SIG CLI? One of the more important ones is that kubectl is moving out of core and into its own repo. This is going to affect a couple of other SIGs, and I have more information on that coming up. As I mentioned before, the krew plug-in manager has become a subproject of SIG CLI, and kubectl server-side apply is alpha in 1.14.
G: It requires enabling a feature gate on the API server. There's some new documentation on kubectl, the kubectl book, which again I'll have more information on in a bit. We also want to mention here that the `--export` flag for `kubectl get` is deprecated; it will be removed in about a year.
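In rough terms, that looks like the following (the API-server feature gate is ServerSideApply; the kubectl flag shown is how clients expose the feature, so check what your client version supports):

```shell
# Enable the alpha feature gate on the API server (1.14):
kube-apiserver --feature-gates=ServerSideApply=true

# With the gate on, the apply merge can be computed on the server:
kubectl apply --server-side -f deployment.yaml
```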
G: So here's some more information on our effort to move kubectl out of core. In the near future there will be a KEP describing the steps, and we will welcome feedback from the community on that. This effort is mostly going to affect, we believe, SIG Release and SIG Testing, and we will especially be soliciting feedback from, and coordinating with, those SIGs during this process.
G: As part of our kubectl independence effort, we have deprecated `kubectl convert`, since this command contains dependencies that tie kubectl to core; we basically can't have this command anymore if we're going to pull kubectl out. The command will be removed by 1.17.
G: So we're happy to announce that krew is a subproject of SIG CLI. Krew is for kind of the day-two and day-three efforts for plugins: it allows for plugin management. Kubernetes is not going to be managing an index of plugins, but we will provide the tools so that organizations who do want to manage plugins will be able to do that.
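As a quick sketch of what that plugin management looks like in practice (the plugin name is an example from the public krew index):

```shell
# Discover, install, and run kubectl plugins via krew:
kubectl krew search
kubectl krew install access-matrix   # example plugin
kubectl access-matrix                # plugins run as kubectl subcommands
kubectl krew upgrade
```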
G: If there are any questions related to krew, please contact Ahmet. Okay, so, as I mentioned before, we're happy to announce that apply has moved to the server side. Here's a rundown of all of the movement that has happened in regard to this: back in 1.13, `kubectl diff` and the server-side dry run already went to beta, and here's a link to the blog post about diff, as I mentioned before.
G: Okay, so we have some brand-new kubectl documentation. It emphasizes declarative management of your apps or your deployments, but it covers all of kubectl's functionality. We're soliciting feedback now on this new documentation, so please have a look, and especially funnel your feedback to Phil Wittrock for this.
H: So, SIG Network: we cover all of the networking associated with Kubernetes. This includes pod networking and CNI; service discovery, which is DNS; load balancing and services, so the Service, Endpoints, and Ingress resources; network security, in NetworkPolicy; and other miscellaneous network-related things not in this list. In terms of things that happened: next slide, please. So, stuff that happened: Ingress was moved into the networking API group, but is still beta. We did this move in 1.14, so please update your code references.
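Concretely, the reference update being asked for is just the API group move:

```shell
# Ingress changed API groups in 1.14; the kind is unchanged:
#   extensions/v1beta1  ->  networking.k8s.io/v1beta1
# Check that the new group is served by your cluster:
kubectl api-versions | grep networking.k8s.io
# Manifests should now begin:
#   apiVersion: networking.k8s.io/v1beta1
#   kind: Ingress
```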
H: There will be quite a few cycles, as outlined in the KEP, before it is deleted from the extensions group, but if everyone updates their references, it will help clean that dependency up. We spent quite a bit of time resolving wrinkles with Windows conformance, as part of the networking aspect of conformance. We have a KEP in flight for cleanup of kube-proxy behavior; I'm being a little too cute in these sentences here. Also, as part of joint work with API Machinery, we have a network proxy KEP, and this one is interesting.
H: The k3s presentation earlier had this reverse tunnel; this KEP is kind of adding that capability into the API server officially, as part of Kubernetes. Please take a look if you're interested; it replaces the SSH tunnels code that exists today. Finally, the pod custom DNS feature went GA in the past three months. Now, in terms of what we're looking at for the next three months: first, we have a deep-dive session on networking at KubeCon Barcelona, so please attend.
H: If you're going to be there, do come. NodeLocal DNS is going to go to beta; we got some pretty good feedback regarding the need for HA and other features, and we're going to try to get that into 1.15. Also in 1.15 is sort of figuring out Ingress v1 scope and GA plans; this basically builds on moving Ingress into the networking API group and then goes towards GA. And then there's work on finally having service finalizers.
H: Service finalizers basically mean that when a service is deleted, the associated cloud provider resources are also deleted. And then, to note: 1.15 planning is ongoing, so please, if you have something that you would like to see happen in 1.15, or maybe 1.16, come to SIG Network and put something on the agenda. And among things that keep happening: we are running a bug triage in every session.
A: Alright, thank you for attending this week's meeting. Before we wrap up, I want to pass on some of the shout-outs we've had. So we have a shout-out for (oh boy, reading Slack names is not great) Rollin Frick, for consistently stepping up to review PRs in the kubernetes org and other community repos. We had a shout-out for Justin SB for working on the v1alpha1 release of Cluster API, and I had a shout-out for Liggitt for the huge number of contributor and development questions he's been answering all around the community.
A: We also have a shout-out for Leah for the work on documenting and managing Cluster API use cases. And this week, our top 10 Stack Overflow users are: alright, well, I can't pronounce this list, I'm sorry, but you can read the names in the meeting notes. So that's it for this week; thanks everyone for joining.