From YouTube: Kubernetes Community Meeting 20190606
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A
Community update for June 6: we have the Kubernetes birthday coming up, so there are lots of events going on this week, with people taking over the Twitter feed and so on. Before we get into the updates and demos for the week, just a reminder that we are following the CNCF code of conduct, so please be respectful to everyone else on the call. All right, first off we have a demo of KubeOne, demonstrating lifecycle management for highly available Kubernetes clusters.
B
Hello everyone, thanks for the introduction. Today we will see KubeOne, a tool for cluster lifecycle management. I am Marko Mudrinić, a software developer at Loodse and a computer science student at the University of Belgrade. You can find me on Twitter, GitHub, and the Kubernetes Slack as xmudrii.
B
So what is KubeOne? KubeOne is an open source tool from Loodse for Kubernetes cluster lifecycle management. It installs and provisions Kubernetes, upgrades it to a newer minor or patch release, and unprovisions clusters. That means automating the work usually done with kubeadm, like joining worker nodes, provisioning kubeadm itself, and removing Kubernetes if that is your choice. KubeOne works on the most popular cloud providers, on premises, and on bare metal. It focuses on highly available cluster support for Kubernetes 1.13 and newer.
B
We are often asked why we built KubeOne when we already have so many awesome tools that manage the cluster lifecycle. One of the reasons is that Kubernetes brought us a new way of managing our workloads: we do it in a cloud native, declarative way. But managing Kubernetes clusters themselves can still be a hard task, so we wanted to apply some of the same principles we use for managing workloads to clusters, and in a search for a fitting solution we decided to build KubeOne.
B
There
are
technologies
to
bring
many
features
in
an
easy
to
preserve
manner
like
cube,
ATM
q,
magnetic
machine
controller,
which
is
open
source
cluster
API
implementation,
as
well
as
custom,
API
itself,
Cuban
Greeks
decorate
across
the
representation.
All
classes
are
represented
in
a
form
of
a
configuration
manifest.
Such
configuration
manifests
can
be
used
to
create
clusters
and
many
similar
clusters,
so
you
can
easily
reproduce.
Then
you
can
share
across
the
manifest
video
calling
same
version
control
systems
and
more
when
you
were
promising
as
the
cluster.
It
is
ready
to
use.
B
We deploy the CNI plugin, and there is a choice between Canal, which is basically Flannel plus Calico, and WeaveNet, which supports some advanced features such as encryption. KubeOne can enable optional features, like PodSecurityPolicy, dynamic audit logging, metrics-server, and more. Besides that, it is possible to integrate KubeOne with infrastructure provisioning tools like Terraform, Ansible, and CloudFormation. Out of the box there is integration for Terraform, so you can source the Terraform state to learn about your instances.
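As a sketch of what enabling those optional features could look like in the cluster manifest (the field names here follow my recollection of the v1alpha1 KubeOne config, so treat them as assumptions and check the KubeOne docs for your release):

```shell
# Write an (assumed) features section for a KubeOne cluster manifest.
# Field names are from the v1alpha1 API and may differ in newer releases.
cat > features.yaml <<'EOF'
features:
  podSecurityPolicy:
    enable: true
  dynamicAuditLog:
    enable: true
  metricsServer:
    enable: true
EOF
grep -c 'enable: true' features.yaml   # prints 3
```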
B
Kubernetes will be installed and configured the same way everywhere: KubeOne is supposed to work on any provider, equally on premises and on bare metal. But officially supported providers enjoy additional features, such as support for managing worker nodes using the Kubermatic machine-controller, automatic deployment of cloud-provider-specific pieces like the external cloud controller manager or any additional configuration a given cloud provider needs, as well as the ability to use the Terraform integration to source information about instances from the Terraform state. Officially supported providers for now include AWS, GCE, and DigitalOcean.
B
Also Hetzner, Packet, OpenStack, and VMware vSphere; Microsoft Azure will be supported as of the upcoming 0.9 release. So how does it work, how can we use KubeOne? The first step is to create the instances and infrastructure to be used by KubeOne. KubeOne comes with example Terraform scripts that can be used to get started with the infrastructure.
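A rough sketch of that first step, assuming the example Terraform configs live per provider in the KubeOne repo (the directory path here is hypothetical); it is wrapped in a function so nothing runs without Terraform and cloud credentials:

```shell
# Hypothetical walkthrough of the bundled example Terraform scripts.
provision_infrastructure() {
  cd examples/terraform/aws || return 1   # assumed path; pick your provider
  terraform init                          # download the provider plugins
  terraform plan                          # review the changes first
  terraform apply                         # create instances, load balancer, etc.
  terraform output -json > tf.json        # machine-readable state for KubeOne
}
command -v provision_infrastructure >/dev/null && echo "sketch defined"
```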
B
Then
you
need
to
build.
Cuba
configuration
manifest,
which
describes
the
desired
cluster
like
what
Cuban
inertia
will
be
installed.
What
machines
will
be
used,
how
the
customer
will
promise
unit?
What
features
will
be
activated
at
coffee,
good
and
more
then,
just
on
Cuban
install
and
the
drawer
Union
cast.
B
So
let's
see
the
first
step,
and
this
is
Billy
Cuba
Krauser
manifest
now
on
the
left
side,
we
have
a
minimal
manifest
that
you
can
use.
If
you
use
terraform
integration
at
the
top
of
it,
we
have
API
version
and
kind
like
for
any
other
community
style
manifest.
Then
we
defined
version
and
we
defined
that
we
are
going
to
deploy
on
abs
for
itself
such
manifest.
You
can
deploy
using
cuban
system
coffee,
gamma
and
then
provide
meter
from
output,
which
is
this
example
code.
Tv
jason.
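Reconstructed from the slide, a minimal manifest of that shape might look like the following; the apiVersion, Kubernetes version, and the --tfjson flag are my best recollection of the 0.x releases, so verify them against the KubeOne docs:

```shell
# Minimal KubeOne cluster manifest for the Terraform integration
# (assumed v1alpha1 schema; versions are illustrative).
cat > config.yaml <<'EOF'
apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
versions:
  kubernetes: '1.14.2'
cloudProvider:
  name: 'aws'
EOF
# Then provision, sourcing host information from the Terraform output:
#   kubeone install config.yaml --tfjson tf.json
grep -q 'kind: KubeOneCluster' config.yaml && echo "manifest written"
```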
B
Such
output
is
generated
using
time
for
mouth
with
comment,
and
it
is
important
to
note
that
to
use
such
litigation,
it
is
aquaria
to
follow
the
template,
use
it
by
coupon
and
you
can
find
it
in
each
example.
It's
a
pattern
for
script,
but
if
you
don't
want
to
use
their
form,
you
can
use
any
other
tool,
but
you're
going
to
need
to
provide
information
about
host
an
API
endpoint
yourself,
like
public
addresses.
B
Now
the
output
is
little
bit
cut,
so
you
can
see
the
size,
but
there
is
much
more
like
private,
a
disease
than
SSH
information.
Ed
more
such
manifests.
You
just
apply
to
use
a
Cuban,
install
and
be
applied
to
the
coffee
manifest
now,
let's
switch
to
the
demo
for
I
can
recorded
a
quick,
a
schema
that
is
going
to
show
how
it
works
and
how
you
can
use
cuban.
This
is
because
the
promise
of
a
process
to
extract
between
5
and
10
minutes,
depending
on
the
code
provider
in
the
bio
mint.
B
So
we
don't
have
time
to
show
it
now,
but
this
is
little
bit
speed-up,
so
you
can
still
see
it.
The
first
thing
we
do
is
to
promise
me.
First,
we
said
some
terrible
permits
like
number
of
gotta
play,
notes,
cause
the
name
and
more.
We
run
tariffs
or
plan
to
see
our
or
changes.
Okay,
then
time
from
apply,
it
usually
takes
several
minutes,
and
after
it
is
done,
it
remote
put
the
turn
from
output
that
we,
as
producing
terraformers,
would
comment
visit
on
this
template
defined
it
in
our
duty
as
far
as
measurement.
B
Now
that
we
have
time
from
output,
we
create
the
coffee
manifest,
which
is
very
similar
to
the
one
we
showed.
This
example
uses
digital
ocean
and
we
were
on
Cuban
Easter
config
I'm
a
bit
iffy
Jason
it
that
my
operating
system
and
everything
needed
generates
cube.
Atm
configuration
file
certificate,
it
is
for
AJ
clustered.
The
Cyclonus
between
cotterpin
notes,
then
run
cube,
ATM
Anita,
baby
I'm,
trying
to
join
kata
play,
notes
and
then
deploy
CNI
machine
controller
and
create
work
cuts
to
be
able
to
use
the
cluster.
You
need
to
export
with
tube
config
environment.
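For example (the kubeconfig file name is an assumption; use whatever file kubeone install wrote next to your manifest):

```shell
# Point kubectl at the freshly provisioned cluster.
CLUSTER_NAME=demo   # hypothetical cluster name
export KUBECONFIG="${PWD}/${CLUSTER_NAME}-kubeconfig"
echo "using ${KUBECONFIG##*/}"   # prints: using demo-kubeconfig
```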
B
Variable
Q
coffee
file
is
divided
automatically.
It
takes
some
time
for
nose
to
appear
there,
candle
by
Cuba
Matic
machine
hotel,
and
this
works
like
a
disco,
so
API
implementation
working
as
a
packet
by
the
Machine
deployment
object,
which
works
like
deployment
object.
It
will
create
machine
set
in
machines
and
then
on
instances
in
the
cloud.
But
what
is
important
here
is
that
you
can
use
cube
Caudill
to
manage
your
working
notes,
create
a
new
ones
edit
existing
one
like
you
can
add
it
to
upgrade
them.
B
Also,
to
an
upgrade
comment
is
taking
care
of
that
automatically.
You
can
increase
the
decrease
number
of
nodes
here.
We
use
queue.
Cutter
scale
comment:
debt
increases
number
of
replicas
first
to
5.
After
some
time,
each
new
replicas
will
appear,
but
you
can
so
sales
something
like
replica
zero.
So
you
delete
old
volcanoes
in
that
machine
employment.
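As a sketch of those scale operations (the MachineDeployment name is hypothetical, and kube-system as the namespace is an assumption about where KubeOne's machine-controller creates them); wrapped in a function so nothing runs without a cluster:

```shell
# Scale a KubeOne worker pool via its MachineDeployment.
scale_workers() {
  replicas=$1
  kubectl --namespace kube-system scale machinedeployment demo-pool1 \
    --replicas="${replicas}"
}
# scale_workers 5   # grow to five workers
# scale_workers 0   # delete every worker backed by this MachineDeployment
command -v scale_workers >/dev/null && echo "sketch defined"
```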
B
If
you
want
to
learn
more
about
machine
couturier,
you
can
see
you
can
find
it
in
cumulative
repository,
but
also
if
you
want
to
learn
more
about
custom
API
itself,
there
is
a
cluster
API
repository
in
the
Kuban
86
organization.
Also
they
have
a
very
nice
gear
book
that
you
can
take
a
look
at
it
and
learn
more
about
it.
B
You
can
find
Q
1
on
github
in
the
Cuban
medical
innovation.
You
can
follow
me
on
Twitter,
X,
Moody
and
follow
route
C
for
updates
about
Cuban
blog
post.
We
have
a
more
returned.
Another
blog
was
that
discovery.
One
is
some
more.
It
is
also.
Last
week
we
have
done
a
CS
CF
webinar,
which
shows
how
human
books
in
detail
it.
Also
we
have
a
cycle
geomatics
like
and
you
can
find
the
heel
ring
to
join.
If
you
want
to
join
Cuba
channel,
ask
questions
or
chat.
D
Yes, hello everyone. We are nearing the end of week 9 of our release cycle. Things that happened this week: we hit our docs PR review milestone, and we're hoping to make sure that all of the review is done by June 10th of next week. The docs PRs came in on Tuesday, and we also cut our second beta yesterday. Coming up next week, we'll be cutting our first RC on June 11th.
D
Additionally, for 1.16 we've started pushing everyone to think about succession. I've nominated Lachlan Evanson to be the 1.16 release lead and he's accepted, so he will be the release lead for the 1.16 release. Everyone look forward to hearing him give the updates every single week at the community call instead of me.
A
All right, heading on: unfortunately we don't have a contributor tip of the week, as there was a last-minute snag with some of our data. I don't see a KEP of the week either, so you're spared that. Moving on to SIG updates, we have first off an update from SIG Multicluster. Are we ready to roll that? Yes.
E
You are correct, I am muted; thank you. There are two main efforts that I'm giving an update on today. I'll start with the second one first, which is cluster identity. This is a very recently started effort to develop a concept of a cluster identifier that is more durable than, say, the addresses of a cluster's API endpoints, the certificates that the cluster may use, etc. These are, as I said, just getting started.
E
The
the
current
state
that
it
is
in
right
now
is
that
there
is
a
proposal
for
these
are
the
things
to
agree
on
about
this
problem,
which
Adrian
from
Google
has
generously
written
up
I
realized
shortly
before
it
was
my
turn
to
speak.
That
I
don't
have
a
link
to
this
in
the
document,
but
I
will
make
sure
that
a
link
is
plumbed
into
that.
E
Secondly,
the
other
project
that
I'm
giving
an
update
on
today
is
called
cube
fed.
You
may
have
previously
heard
of
this
project
with
the
name
Federation
v2
we've
recently
renamed
it,
and
if
you
could
advance
the
slide,
that
would
be
great
George
and
maybe
another
one,
because
that's
not
an
oh
there.
We
go
already
covered
that
one
about
one
more
so
cube.
E
If we could advance the slide one more: so, we're in a state where we have an initial release candidate for our first beta release. We're currently working through some API mechanics around how upgrades will work, but our plan is to have an initial beta release soon, with potentially a GA later this year. If we could advance one more: in our last cycle, since we last gave a SIG update, we've done a fairly significant overhaul of the API, in the sense that the current API is far simpler. To summarize it very briefly:
E
One
of
the
fundamental
operations
of
cube
fed
is
to
enable
the
Federation
capability
for
an
arbitrary
API
resource,
whether
that
is
part
of
the
kubernetes
api
proper
or
a
CR
d
that
you've
created
that
you
want
to
spread
to
multiple
clusters
in
the
scheme,
for
how
we
did
that
at
the
the
point
where
we
gave
the
last
sick
update,
you
would
run
this
enable
operation
and
you'd
get
three
different
api
resources
that
captured
the
different
dimensions
of.
What's
the
essential
definition
of
a
resource
that
I'd
like
to
spread.
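That enable operation maps to the project's CLI; here is a sketch under the assumption that the tool is kubefedctl with enable and federate subcommands (check the KubeFed user guide for the exact invocation), wrapped in a function so nothing runs without a KubeFed control plane:

```shell
# Enable federation for a resource type, then convert an existing object.
enable_and_federate() {
  kubefedctl enable deployments          # generates the federated API type
  kubefedctl federate deployment my-app  # hypothetical existing Deployment
}
command -v enable_and_federate >/dev/null && echo "sketch defined"
```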
E
There is also a CLI command that will convert normal resources into their federation equivalents. If we could advance one slide: I'll say that what I think is probably the most exciting thing we've done in the last cycle is that, as we've collapsed these different API resources that represent the different elements of the federation API into a single one, we've also done a lot of work to make the status of that single API resource meaningful and useful to you.
E
So
to
just
talk
through
a
quick
example
say
that
I
would
like
to
spread
kubernetes
deployment
resources
over
multiple
clusters.
I
would
enable
that
capability
for
deployments,
and
that
would
give
me
a
new
API
surface
called
federated
deployment
and
I
can
use
that
API
surface
to
say.
Here's
the
essential
definition
of
a
deployment
I'd
like
to
spread
I
want
it
to
go
to
clusters
a
B
and
C
and
in
cluster
C
I
would
like
to
have
10
replicas
instead
of
5.
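A sketch of that example as a single FederatedDeployment, with a template, placement, and an override for cluster C; the schema paraphrases the KubeFed beta API, so field names may differ by release:

```shell
# FederatedDeployment: spread one Deployment to clusters a, b, and c,
# overriding replicas to 10 in cluster c.
cat > federated-deployment.yaml <<'EOF'
apiVersion: types.kubefed.k8s.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
spec:
  template:                # the essential Deployment definition
    spec:
      replicas: 5
  placement:
    clusters:
    - name: a
    - name: b
    - name: c
  overrides:
  - clusterName: c
    clusterOverrides:
    - path: "/spec/replicas"
      value: 10
EOF
grep -q 'kind: FederatedDeployment' federated-deployment.yaml && echo "manifest written"
```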
E
What
you're
able
to
see
now
in
the
status
of
that
federated
deployment
resources
exactly
where
has
it
been
deployed
to?
Are
there
any
problems
propagating
the
resource,
etc?
And
then,
finally,
also
very
exciting
to
me
as
someone
that
wants
api's
in
kubernetes
to
work
like
one
another,
we've
also
done
a
lot
of
work
to
make
the
API
surface
both
that
configures
cuvette
and
also
that's
generated
by
cube,
fed
more
compliant
with
uber
Nettie's
api
conventions.
E
How
about
the
next
slide
in
our
upcoming
cycle?
I
foresee
us
doing
more
work
to
ease
the
barrier
to
entry
and
and
make
adoption
of
the
cube
bed
tool
to
try
it
out
to
use
it
etc
even
lower
than
it
is
today,
and
at
that
point
or
after
the
point
of
our
initial
beta
release,
I
think
that
will
also
be
in
a
position
to
develop
higher-level
api's
as
part
of
that
mission
to
be
useful
for
more
interesting
things
like
multi
geo,
apps
or
automated
failover
across
clusters.
E
Next
slide,
please.
So
how
can
you
contribute?
We
would
love
to
have
additional
contributors.
The
the
project
is
under
kubernetes
SIG's.
Under
the
name
Cupid,
we
are
especially
interested
in
folks
that
may
have
an
interest
trying
it
out
and
giving
us
feedback
as
that's
the
best
time
to
get
that
is
before
you
take
an
api
to
beta
and
our
maintainer
x'
are
interested
in
providing
mentorship.
So
please
do
check
us
out.
E
The
work
on
cuvette
is
happening
in
the
cube,
fed
working
group
which
meets
every
Wednesday
and
again,
there
is
more
specific
information
about
that
on
our
six
page,
so
I
think
I
will
end
it
there.
Thank
you
very
much,
and
let
me
know
if
there
are
any
questions.
F
Is this a little bit better? Oh yeah, all right, no problem. Well, thanks everyone! So I'm Patrick Lang, one of the co-chairs for SIG Windows; our other chair, Michael, is also on the call as well, but I'll give a quick update of where we're at.
F
There
we
go
now.
My
browser
works
all
right,
so
so,
looking
back
you
know,
1.14
was
the
release
where
we
graduated
windows
node
support
to
stable,
and
that
was
really
you
know
pretty
much
a
big
culmination
of
of
a
large
stability
effort.
So
that
way
we
could
get
everything
in
a
good
supportable
state
and
make
sure
that
we
were
keeping
things
working
consistent
going
forward.
F
So
the
first
thing
that
we're
working
on
was
support
for
using
cube
ATM
to
be
able
to
join
Windows
nodes
to
an
existing
cluster
I'll,
actually
give
a
some
more
details
in
a
short
demo
of
that.
In
a
minute,
then,
the
other
thing
that
we
did
was
there's
four
customers
that
are
using
Windows
Active
Directory
for
identify
for
basically
providing
an
identity
between
an
application
and
another
service.
An
example
would
be
if
they
wrote
something
in
dotnet
and
they
want
to
connect
to
a
sequel
database.
F
They
were
frequently
using,
what's
called
a
managed
service
account,
and
so
we
had
an
alpha
annotation
available
for
that,
but
for
version
15
we
went
through
the
API
review
process
and
we
actually
created
a
new
Windows
security
context
options.
That's
available
there
under
the
container
spec
within
the
kubernetes
api.
The
first
field
we
added
in
there
was
the
one
needed
to
enable
I'm
group
managed
service
accounts
and
then
the
next
one
that
we're
I'm,
currently
finishing
up,
is
being
able
to
run
as
a
particular
username
within
the
Windows
container.
F
This
is
a
bit
different
from
from
Linux,
because
in
Linux
you
always
use
the
UID
and
GID
Windows
doesn't
have
those.
The
name
is
actually
converted
to
a
binary
identifier
behind
the
scenes,
and
so
we
were
implementing
that.
So
that
way,
if
you've
got
a
container,
that's
got.
You
know
a
non
privileged
user
plus
a
privileged
account
and
you
can
correctly
create
an
exact
things
as
those
users.
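A sketch of those new fields on the pod spec (the pod, image, and credential-spec names are hypothetical, and since gMSA and runAsUserName were early-stage around 1.15, the exact field availability may differ in your cluster):

```shell
# Pod using the Windows security context options described above.
cat > win-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: win-webapp
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore/iis
    securityContext:
      windowsOptions:
        gmsaCredentialSpecName: webapp-gmsa   # hypothetical credential spec
        runAsUserName: "ContainerUser"        # a name, not a numeric UID/GID
EOF
grep -q 'windowsOptions' win-pod.yaml && echo "manifest written"
```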
F
The
other
thing
is,
we
had
a
great
contribution
from
from
from
Ben
from
Ben
moss,
but
one
of
the
things
we
realized
was
that
the
way
cube
control
port
forward
was
working.
It
was
actually
using
a
streaming
operation
that
was
actually
using
NS
enter
and
so
Kat,
which
was
you
know.
Those
are
both
Linux
specific
concepts,
and
so
we
found
a
workaround
where
we
can
make
something
that
was
like
so
cat
been
called
a
twin
cat,
but
that
lets
us
actually
start
a
port
forwarding
little
demon
inside
the
pod.
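From the user's side nothing changes; a sketch (the pod name is hypothetical), wrapped in a function so it doesn't require a cluster:

```shell
# Forward local port 8080 to port 80 of a Windows pod; with the wincat-based
# streaming in the updated pause image, this works like it does on Linux.
forward_windows_pod() {
  kubectl port-forward pod/win-webapp 8080:80
}
command -v forward_windows_pod >/dev/null && echo "sketch defined"
```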
F
So
that
way
we
can
use
the
cube
control
port
forward
command
going
forward
in
fifteen,
and
so
we'll
be
updating
the
pause
image.
But
that's
something
that
you
know
we'd
love
to
hear
feedback
on
before
you
know
if
it's
working
for
from
windows
developers
and
then
you'll
be
on
that
we've
been
started
to
work
at
how
we
can
take
advantage
of
some
of
the
network
updates
that
are
there
and
Windows
Server
1903
and
there's
been
some
improvements
around
direct
server
return.
F
So this is going to make it much, much easier to run conformance on ARM and also on Windows workloads in the future, and also to speed up the conformance passes in general, because they're already shaving off gigabytes of images that need to be downloaded. So that's something that we'll be seeing a lot of benefits from soon.
F
So
it
could
detect
the
right
CRI
and
get
the
path
to
that,
as
well
as
work
with
the
windows
init
system,
to
make
sure
that
the
right
dependent
services
we're
started,
and
so
that
fits
the
you
know,
initial
goal
of
cube,
ATM,
focusing
on
just
basically
bootstrapping
and
joining
the
the
cubelet
to
the
existing
deployment.
That
was
run
with
cube,
ATM
and
net.
F
So just to give you a quick demo here: I've got a couple of VMs running on my laptop, currently two Linux nodes, both running CentOS. What I want to do here is actually go ahead and get a new kubeadm join token. So I've got that, and... oh gosh, go away kitty; sorry, the kitty is stealing stuff off my desk. So, going over to the Windows node.
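The token step can be sketched like this (wrapped in a function, since it needs a running control plane; the join command shape shown is standard kubeadm output):

```shell
# On the Linux control plane: mint a token and print the matching join
# command, which is then pasted into PowerShell on the Windows node.
new_join_command() {
  kubeadm token create --print-join-command
}
# Typical output shape:
#   kubeadm join <api-server>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
command -v new_join_command >/dev/null && echo "sketch defined"
```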
F
Now
we
can
see
that,
in
addition
to
the
two
links
notes
I've
already,
there
we've
got
the
windows
node.
There
there's
still
a
couple
things
that
we're
working
out
for
on
the
installation.
Experience
though,
and
that's
why
that
note
is
not
ready.
So
the
next
thing
that
were
that
we
need
to
work
on
is
sort
of
finishing
up
how
we're
doing
CNI
deployments
so
on
Linux.
Those
are
done
through
daemon
sets
which
deploy
privileged
containers.
F
What's exciting is containerd: the goal for the containerd project was to fully support Windows with the 1.3 release, so that's still ongoing, and we're basically working together with them so that we can get a repeatable test pass up and running on TestGrid. Unfortunately, that's still not complete yet, and I'll be carrying that work over into 1.16. As I mentioned, we're continuing to work on the install experience.
F
We've
got
the
key
QA
team
work
going,
but
we're
still
working
to
make
a
few
more
scripts
to
make
it
a
little
bit
easier
to
get
those
nodes
up
and
running
and
then
continue
working
on
the
run
as
user
name
support
and
so
sort
of
beyond
that,
we're
going
to
continue
working
on
promoting
effete
features
like
GMS,
a
and
stuff
beta
and
stable,
and
then
working
more
on
some
of
the
ecosystem.
Plug-Ins
for
CNI
and
storage.
F
Things
are
still
a
bit
early
there,
but
we
do
have
working
plugins
from
multiple
cloud
providers.
The
tests
for
those
are
already
visible
on
test
grid,
but
we've
got
some
storage
plugins
and
it
works
as
well
and
there's
actually
a
great
presentation
a
few
weeks
ago,
at
docker
con
in
Barcelona.
If
you
want
to
know
a
little
bit
more
about
how
some
of
those
plugins
are
are
proceeding,
so
I
think
that's
pretty
much
it
for
today.
So
if
you
want
to
join
you
know,
we've
got
our
weekly
cig
meeting.
A
Tim Pepper has a shout-out for Michelle Au: every time there's been a flaky test on storage, Michelle has been hours ahead of him, and the issue has been triaged and fixed. Sounds like awesome work going on there. There are also props for taking stellar notes at lightning speed at the cluster lifecycle and Cluster API meetings.
A
That
sounds
awesome
makes
things
way
more
accessible
for
those
who
can't
make
the
meeting
and
lastly,
I
wanted
to
give
a
shout
out
to
sick
and
Trebek's
I
pitch
in
from
time
to
time,
but
it's
nothing
compared
to
the
work
that
goes
on
every
day
to
just
keep
everything
running
and
make
sure
the
developers
can
get
the
jobs
done.
So
thank
you.
Everyone
and
I
believe
that
is
it
for
this
meeting.
So
thank
you
for
joining
and
we'll
see
you
in
another
week.